00:00:00.001 Started by upstream project "autotest-per-patch" build number 126170 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23925 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.071 The recommended git tool is: git 00:00:00.071 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.185 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/40/22240/21 # timeout=5 00:00:07.122 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.135 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.150 Checking out Revision b0ebb039b16703d64cc7534b6e0fa0780ed1e683 (FETCH_HEAD) 00:00:07.150 > git config core.sparsecheckout # timeout=10 00:00:07.164 > git read-tree -mu HEAD # timeout=10 00:00:07.181 > git checkout -f b0ebb039b16703d64cc7534b6e0fa0780ed1e683 # timeout=5 00:00:07.209 Commit message: "jenkins/jjb-config: Add support for native DPDK build into docker-autoruner" 00:00:07.209 > git rev-list --no-walk 055051402f6bd793109ccc450ac2f885bb0fdaeb # timeout=10 00:00:07.302 [Pipeline] Start of Pipeline 00:00:07.316 [Pipeline] library 00:00:07.318 Loading library shm_lib@master 00:00:07.318 Library shm_lib@master is cached. Copying from home. 00:00:07.336 [Pipeline] node 00:00:07.347 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.348 [Pipeline] { 00:00:07.358 [Pipeline] catchError 00:00:07.359 [Pipeline] { 00:00:07.371 [Pipeline] wrap 00:00:07.382 [Pipeline] { 00:00:07.391 [Pipeline] stage 00:00:07.393 [Pipeline] { (Prologue) 00:00:07.595 [Pipeline] sh 00:00:07.881 + logger -p user.info -t JENKINS-CI 00:00:07.901 [Pipeline] echo 00:00:07.902 Node: GP8 00:00:07.909 [Pipeline] sh 00:00:08.208 [Pipeline] setCustomBuildProperty 00:00:08.218 [Pipeline] echo 00:00:08.220 Cleanup processes 00:00:08.225 [Pipeline] sh 00:00:08.507 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.507 2814383 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.518 [Pipeline] sh 00:00:08.815 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.815 ++ grep -v 'sudo pgrep' 00:00:08.815 ++ awk '{print $1}' 00:00:08.815 + sudo kill -9 00:00:08.815 + true 00:00:08.831 [Pipeline] cleanWs 00:00:08.842 [WS-CLEANUP] Deleting project workspace... 00:00:08.842 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.849 [WS-CLEANUP] done 00:00:08.854 [Pipeline] setCustomBuildProperty 00:00:08.870 [Pipeline] sh 00:00:09.152 + sudo git config --global --replace-all safe.directory '*' 00:00:09.219 [Pipeline] httpRequest 00:00:09.258 [Pipeline] echo 00:00:09.259 Sorcerer 10.211.164.101 is alive 00:00:09.265 [Pipeline] httpRequest 00:00:09.270 HttpMethod: GET 00:00:09.270 URL: http://10.211.164.101/packages/jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:09.271 Sending request to url: http://10.211.164.101/packages/jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:09.279 Response Code: HTTP/1.1 200 OK 00:00:09.280 Success: Status code 200 is in the accepted range: 200,404 00:00:09.280 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:11.907 [Pipeline] sh 00:00:12.190 + tar --no-same-owner -xf jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:12.205 [Pipeline] httpRequest 00:00:12.234 [Pipeline] echo 00:00:12.236 Sorcerer 10.211.164.101 is alive 00:00:12.246 [Pipeline] httpRequest 00:00:12.251 HttpMethod: GET 00:00:12.251 URL: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:12.252 Sending request to url: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:12.274 Response Code: HTTP/1.1 200 OK 00:00:12.275 Success: Status code 200 is in the accepted range: 200,404 00:00:12.275 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:01:41.779 [Pipeline] sh 00:01:42.066 + tar --no-same-owner -xf spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:01:45.367 [Pipeline] sh 00:01:45.659 + git -C spdk log --oneline -n5 00:01:45.659 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:45.659 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:45.659 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:45.659 719d03c6a sock/uring: only register net impl if supported 00:01:45.659 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:45.672 [Pipeline] } 00:01:45.690 [Pipeline] // stage 00:01:45.703 [Pipeline] stage 00:01:45.706 [Pipeline] { (Prepare) 00:01:45.728 [Pipeline] writeFile 00:01:45.747 [Pipeline] sh 00:01:46.032 + logger -p user.info -t JENKINS-CI 00:01:46.045 [Pipeline] sh 00:01:46.329 + logger -p user.info -t JENKINS-CI 00:01:46.343 [Pipeline] sh 00:01:46.631 + cat autorun-spdk.conf 00:01:46.631 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.631 SPDK_TEST_NVMF=1 00:01:46.631 SPDK_TEST_NVME_CLI=1 00:01:46.631 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.631 SPDK_TEST_NVMF_NICS=e810 00:01:46.631 SPDK_TEST_VFIOUSER=1 00:01:46.631 SPDK_RUN_UBSAN=1 00:01:46.631 NET_TYPE=phy 00:01:46.639 RUN_NIGHTLY=0 00:01:46.646 [Pipeline] readFile 00:01:46.678 [Pipeline] withEnv 00:01:46.681 [Pipeline] { 00:01:46.697 [Pipeline] sh 00:01:46.984 + set -ex 00:01:46.984 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:46.984 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:46.984 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.984 ++ SPDK_TEST_NVMF=1 00:01:46.984 ++ SPDK_TEST_NVME_CLI=1 00:01:46.984 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.984 ++ SPDK_TEST_NVMF_NICS=e810 00:01:46.984 ++ SPDK_TEST_VFIOUSER=1 00:01:46.984 ++ SPDK_RUN_UBSAN=1 00:01:46.984 ++ NET_TYPE=phy 00:01:46.984 ++ RUN_NIGHTLY=0 00:01:46.984 + case $SPDK_TEST_NVMF_NICS in 00:01:46.984 + DRIVERS=ice 00:01:46.984 + [[ tcp == 
\r\d\m\a ]] 00:01:46.984 + [[ -n ice ]] 00:01:46.984 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:46.984 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:46.984 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:46.984 rmmod: ERROR: Module irdma is not currently loaded 00:01:46.984 rmmod: ERROR: Module i40iw is not currently loaded 00:01:46.984 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:46.984 + true 00:01:46.984 + for D in $DRIVERS 00:01:46.984 + sudo modprobe ice 00:01:46.984 + exit 0 00:01:46.994 [Pipeline] } 00:01:47.017 [Pipeline] // withEnv 00:01:47.024 [Pipeline] } 00:01:47.045 [Pipeline] // stage 00:01:47.058 [Pipeline] catchError 00:01:47.060 [Pipeline] { 00:01:47.080 [Pipeline] timeout 00:01:47.080 Timeout set to expire in 50 min 00:01:47.082 [Pipeline] { 00:01:47.100 [Pipeline] stage 00:01:47.102 [Pipeline] { (Tests) 00:01:47.120 [Pipeline] sh 00:01:47.407 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:47.407 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:47.407 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:47.407 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:47.407 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.407 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:47.407 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:47.407 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:47.407 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:47.407 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:47.407 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:47.407 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:47.407 + source /etc/os-release 00:01:47.407 ++ NAME='Fedora Linux' 00:01:47.407 ++ VERSION='38 (Cloud Edition)' 00:01:47.407 ++ ID=fedora 00:01:47.407 ++ VERSION_ID=38 00:01:47.407 ++ VERSION_CODENAME= 00:01:47.407 ++ PLATFORM_ID=platform:f38 00:01:47.407 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:47.407 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:47.407 ++ LOGO=fedora-logo-icon 00:01:47.407 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:47.407 ++ HOME_URL=https://fedoraproject.org/ 00:01:47.407 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:47.407 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:47.407 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:47.407 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:47.407 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:47.407 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:47.407 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:47.407 ++ SUPPORT_END=2024-05-14 00:01:47.407 ++ VARIANT='Cloud Edition' 00:01:47.407 ++ VARIANT_ID=cloud 00:01:47.407 + uname -a 00:01:47.407 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:47.407 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:48.785 Hugepages 00:01:48.785 node hugesize free / total 00:01:48.785 node0 1048576kB 0 / 0 00:01:48.785 node0 2048kB 0 / 0 00:01:48.785 node1 1048576kB 0 / 0 00:01:48.785 node1 2048kB 0 / 0 00:01:48.785 00:01:48.785 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:48.785 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:48.785 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:48.785 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:48.785 I/OAT 
0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:48.785 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:48.785 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:48.785 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:48.785 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:48.785 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:48.785 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:48.785 + rm -f /tmp/spdk-ld-path 00:01:48.785 + source autorun-spdk.conf 00:01:48.785 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:48.785 ++ SPDK_TEST_NVMF=1 00:01:48.785 ++ SPDK_TEST_NVME_CLI=1 00:01:48.785 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:48.785 ++ SPDK_TEST_NVMF_NICS=e810 00:01:48.785 ++ SPDK_TEST_VFIOUSER=1 00:01:48.785 ++ SPDK_RUN_UBSAN=1 00:01:48.785 ++ NET_TYPE=phy 00:01:48.785 ++ RUN_NIGHTLY=0 00:01:48.785 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:48.785 + [[ -n '' ]] 00:01:48.785 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.785 + for M in /var/spdk/build-*-manifest.txt 00:01:48.785 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:48.785 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:48.785 + for M in /var/spdk/build-*-manifest.txt 00:01:48.785 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:48.785 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:48.785 ++ uname 00:01:48.785 + [[ Linux == \L\i\n\u\x ]] 00:01:48.785 + sudo dmesg -T 00:01:48.785 + sudo dmesg --clear 00:01:48.785 + dmesg_pid=2815684 00:01:48.785 + [[ Fedora Linux == FreeBSD ]] 00:01:48.785 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:48.785 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:48.785 + sudo dmesg -Tw 00:01:48.785 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:48.785 + [[ -x /usr/src/fio-static/fio ]] 00:01:48.785 + export FIO_BIN=/usr/src/fio-static/fio 00:01:48.785 + FIO_BIN=/usr/src/fio-static/fio 00:01:48.785 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:48.785 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:48.785 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:48.785 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:48.785 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:48.785 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:48.785 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:48.785 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:48.785 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:48.785 Test configuration: 00:01:48.785 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:48.785 SPDK_TEST_NVMF=1 00:01:48.785 SPDK_TEST_NVME_CLI=1 00:01:48.785 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:48.785 SPDK_TEST_NVMF_NICS=e810 00:01:48.785 SPDK_TEST_VFIOUSER=1 00:01:48.785 SPDK_RUN_UBSAN=1 00:01:48.785 NET_TYPE=phy 00:01:48.785 RUN_NIGHTLY=0 11:27:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:48.785 11:27:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:48.785 11:27:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:48.785 11:27:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:48.785 11:27:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.785 11:27:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.785 11:27:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.785 11:27:56 -- paths/export.sh@5 -- $ export PATH 00:01:48.785 11:27:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:48.785 11:27:56 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:48.785 11:27:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:48.785 11:27:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721035676.XXXXXX 00:01:48.785 11:27:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721035676.mFcT9O 00:01:48.785 11:27:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:48.785 11:27:56 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:48.785 11:27:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:48.785 11:27:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:48.785 11:27:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:48.785 11:27:56 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:48.785 11:27:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:48.785 11:27:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.785 11:27:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:48.785 11:27:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:48.785 11:27:56 -- pm/common@17 -- $ local monitor 00:01:48.785 11:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.785 11:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.785 11:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.785 11:27:56 -- pm/common@21 -- $ date +%s 00:01:48.785 11:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:48.785 11:27:56 -- pm/common@21 -- $ date +%s 00:01:48.785 11:27:56 -- pm/common@25 -- $ sleep 1 00:01:48.785 11:27:56 -- pm/common@21 -- $ date +%s 00:01:48.785 11:27:56 -- pm/common@21 -- $ date +%s 00:01:48.785 11:27:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035676 00:01:48.785 11:27:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035676 00:01:48.785 11:27:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035676 00:01:48.785 11:27:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035676 00:01:48.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035676_collect-vmstat.pm.log 00:01:48.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035676_collect-cpu-load.pm.log 00:01:48.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035676_collect-cpu-temp.pm.log 00:01:48.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035676_collect-bmc-pm.bmc.pm.log 00:01:49.725 11:27:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:49.725 11:27:57 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:49.725 11:27:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:49.725 11:27:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.725 11:27:57 -- spdk/autobuild.sh@16 -- $ date -u 00:01:49.725 Mon Jul 15 09:27:57 AM UTC 2024 00:01:49.725 11:27:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:49.725 v24.09-pre-205-ge7cce062d 00:01:49.725 11:27:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:49.725 11:27:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:49.725 11:27:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:49.725 11:27:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.725 11:27:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.725 11:27:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.725 ************************************ 00:01:49.725 START TEST ubsan 00:01:49.725 ************************************ 00:01:49.725 11:27:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:49.725 using ubsan 00:01:49.725 00:01:49.725 real 0m0.000s 00:01:49.725 user 0m0.000s 00:01:49.725 sys 0m0.000s 00:01:49.725 11:27:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:49.725 11:27:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:49.725 ************************************ 00:01:49.725 END TEST ubsan 00:01:49.725 ************************************ 00:01:49.725 11:27:57 -- common/autotest_common.sh@1142 -- $ return 0 00:01:49.725 11:27:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:49.725 11:27:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:49.725 11:27:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:49.725 11:27:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:49.725 11:27:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:49.725 11:27:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:49.725 11:27:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:49.725 11:27:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:49.725 11:27:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:49.984 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:49.984 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.244 Using 'verbs' RDMA provider 00:02:00.870 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:10.847 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:10.847 Creating mk/config.mk...done. 00:02:10.847 Creating mk/cc.flags.mk...done. 00:02:10.847 Type 'make' to build. 00:02:10.847 11:28:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:10.847 11:28:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:10.847 11:28:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:10.847 11:28:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.847 ************************************ 00:02:10.847 START TEST make 00:02:10.847 ************************************ 00:02:10.847 11:28:18 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:10.847 make[1]: Nothing to be done for 'all'. 
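For reference, the configure and build step recorded above reduces to the following standalone invocation. This is a sketch assuming a local checkout at the same workspace path; the CI wrappers (autobuild.sh, run_test, the pm monitors) only add logging and resource monitoring around these commands, and the flags are copied verbatim from the log:

    # Hypothetical local reproduction of the configure/build step shown above.
    # Paths and flags are taken from this log; adjust the workspace path for a local checkout.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48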
00:02:12.237 The Meson build system 00:02:12.237 Version: 1.3.1 00:02:12.237 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:12.237 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:12.237 Build type: native build 00:02:12.237 Project name: libvfio-user 00:02:12.237 Project version: 0.0.1 00:02:12.237 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:12.237 C linker for the host machine: cc ld.bfd 2.39-16 00:02:12.237 Host machine cpu family: x86_64 00:02:12.237 Host machine cpu: x86_64 00:02:12.237 Run-time dependency threads found: YES 00:02:12.237 Library dl found: YES 00:02:12.237 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:12.237 Run-time dependency json-c found: YES 0.17 00:02:12.237 Run-time dependency cmocka found: YES 1.1.7 00:02:12.237 Program pytest-3 found: NO 00:02:12.237 Program flake8 found: NO 00:02:12.237 Program misspell-fixer found: NO 00:02:12.237 Program restructuredtext-lint found: NO 00:02:12.237 Program valgrind found: YES (/usr/bin/valgrind) 00:02:12.237 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.237 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.237 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.237 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:12.237 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:12.237 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:12.237 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:12.237 Build targets in project: 8 00:02:12.237 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:12.237 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:12.237 00:02:12.237 libvfio-user 0.0.1 00:02:12.237 00:02:12.237 User defined options 00:02:12.237 buildtype : debug 00:02:12.237 default_library: shared 00:02:12.237 libdir : /usr/local/lib 00:02:12.237 00:02:12.237 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.187 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:13.187 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:13.187 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:13.187 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:13.187 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:13.187 [5/37] Compiling C object samples/null.p/null.c.o 00:02:13.187 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:13.187 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:13.187 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:13.187 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:13.187 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:13.450 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:13.450 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:13.450 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:13.450 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:13.450 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:13.450 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:13.450 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:13.450 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:13.450 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:13.450 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:13.450 [21/37] Compiling C object samples/server.p/server.c.o 00:02:13.450 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:13.450 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:13.450 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:13.450 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:13.450 [26/37] Compiling C object samples/client.p/client.c.o 00:02:13.450 [27/37] Linking target samples/client 00:02:13.710 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:13.710 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:13.710 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:13.710 [31/37] Linking target test/unit_tests 00:02:13.969 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:13.969 [33/37] Linking target samples/server 00:02:13.969 [34/37] Linking target samples/null 00:02:13.969 [35/37] Linking target samples/gpio-pci-idio-16 00:02:13.969 [36/37] Linking target samples/lspci 00:02:13.969 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:13.969 INFO: autodetecting backend as ninja 00:02:13.969 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:13.969 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:14.919 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:14.919 ninja: no work to do. 00:02:19.094 The Meson build system 00:02:19.094 Version: 1.3.1 00:02:19.094 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:19.094 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:19.094 Build type: native build 00:02:19.094 Program cat found: YES (/usr/bin/cat) 00:02:19.094 Project name: DPDK 00:02:19.094 Project version: 24.03.0 00:02:19.094 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:19.094 C linker for the host machine: cc ld.bfd 2.39-16 00:02:19.094 Host machine cpu family: x86_64 00:02:19.094 Host machine cpu: x86_64 00:02:19.094 Message: ## Building in Developer Mode ## 00:02:19.094 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:19.094 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:19.094 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.094 Program python3 found: YES (/usr/bin/python3) 00:02:19.094 Program cat found: YES (/usr/bin/cat) 00:02:19.094 Compiler for C supports arguments -march=native: YES 00:02:19.094 Checking for size of "void *" : 8 00:02:19.094 Checking for size of "void *" : 8 (cached) 00:02:19.094 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:19.094 Library m found: YES 00:02:19.094 Library numa found: YES 00:02:19.094 Has header "numaif.h" : YES 00:02:19.094 Library fdt found: NO 00:02:19.094 Library execinfo found: NO 00:02:19.094 Has header "execinfo.h" : YES 00:02:19.094 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:19.094 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.094 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.094 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.094 Run-time dependency openssl found: YES 3.0.9 00:02:19.094 Run-time dependency libpcap found: YES 1.10.4 00:02:19.094 Has header "pcap.h" with dependency libpcap: YES 00:02:19.094 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.094 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.094 Compiler for C supports arguments -Wformat: YES 00:02:19.094 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.094 Compiler for C supports arguments -Wformat-security: NO 00:02:19.094 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.094 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.094 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.094 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.094 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.094 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.094 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.094 Compiler for C supports arguments -Wundef: YES 00:02:19.094 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.094 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.094 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:19.094 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.094 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.094 Program objdump found: YES (/usr/bin/objdump) 00:02:19.094 Compiler for C supports arguments -mavx512f: YES 00:02:19.094 Checking if "AVX512 checking" compiles: YES 00:02:19.094 Fetching value of define "__SSE4_2__" : 1 00:02:19.094 Fetching value of define "__AES__" : 1 00:02:19.094 Fetching value of define "__AVX__" : 1 00:02:19.094 Fetching value of define "__AVX2__" : (undefined) 00:02:19.094 Fetching value of define "__AVX512BW__" : (undefined) 00:02:19.094 Fetching value of define "__AVX512CD__" : (undefined) 00:02:19.094 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:19.094 Fetching value of define "__AVX512F__" : (undefined) 00:02:19.094 Fetching value of define "__AVX512VL__" : (undefined) 00:02:19.094 Fetching value of define "__PCLMUL__" : 1 00:02:19.094 Fetching value of define "__RDRND__" : 1 00:02:19.094 Fetching value of define "__RDSEED__" : (undefined) 00:02:19.094 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:19.094 Fetching value of define "__znver1__" : (undefined) 00:02:19.094 Fetching value of define "__znver2__" : (undefined) 00:02:19.094 Fetching value of define "__znver3__" : (undefined) 00:02:19.094 Fetching value of define "__znver4__" : (undefined) 00:02:19.094 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.094 Message: lib/log: Defining dependency "log" 00:02:19.094 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.094 Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.094 Checking for function "getentropy" : NO 00:02:19.094 Message: lib/eal: Defining dependency "eal" 00:02:19.094 Message: lib/ring: Defining dependency "ring" 00:02:19.094 Message: lib/rcu: Defining dependency "rcu" 00:02:19.094 Message: lib/mempool: Defining dependency "mempool" 00:02:19.094 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.094 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:19.094 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.094 Compiler for C supports arguments -mpclmul: YES 00:02:19.094 Compiler for C supports arguments -maes: YES 00:02:19.094 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.094 Compiler for C supports arguments -mavx512bw: YES 00:02:19.094 Compiler for C supports arguments -mavx512dq: YES 00:02:19.094 Compiler for C supports arguments -mavx512vl: YES 00:02:19.094 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:19.094 Compiler for C supports arguments -mavx2: YES 00:02:19.094 Compiler for C supports arguments -mavx: YES 00:02:19.094 Message: lib/net: Defining dependency "net" 00:02:19.094 Message: lib/meter: Defining dependency "meter" 00:02:19.094 Message: lib/ethdev: Defining dependency "ethdev" 00:02:19.094 Message: lib/pci: Defining dependency "pci" 00:02:19.094 Message: lib/cmdline: Defining dependency "cmdline" 00:02:19.094 Message: lib/hash: Defining dependency "hash" 00:02:19.095 Message: lib/timer: Defining dependency "timer" 00:02:19.095 Message: lib/compressdev: Defining dependency "compressdev" 00:02:19.095 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:19.095 Message: lib/dmadev: Defining dependency "dmadev" 00:02:19.095 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:19.095 Message: lib/power: Defining dependency "power" 00:02:19.095 Message: lib/reorder: Defining dependency "reorder" 00:02:19.095 
Message: lib/security: Defining dependency "security" 00:02:19.095 Has header "linux/userfaultfd.h" : YES 00:02:19.095 Has header "linux/vduse.h" : YES 00:02:19.095 Message: lib/vhost: Defining dependency "vhost" 00:02:19.095 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.095 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.095 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.095 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.095 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:19.095 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:19.095 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:19.095 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:19.095 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:19.095 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:19.095 Program doxygen found: YES (/usr/bin/doxygen) 00:02:19.095 Configuring doxy-api-html.conf using configuration 00:02:19.095 Configuring doxy-api-man.conf using configuration 00:02:19.095 Program mandb found: YES (/usr/bin/mandb) 00:02:19.095 Program sphinx-build found: NO 00:02:19.095 Configuring rte_build_config.h using configuration 00:02:19.095 Message: 00:02:19.095 ================= 00:02:19.095 Applications Enabled 00:02:19.095 ================= 00:02:19.095 00:02:19.095 apps: 00:02:19.095 00:02:19.095 00:02:19.095 Message: 00:02:19.095 ================= 00:02:19.095 Libraries Enabled 00:02:19.095 ================= 00:02:19.095 00:02:19.095 libs: 00:02:19.095 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:19.095 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:19.095 cryptodev, dmadev, power, reorder, security, vhost, 00:02:19.095 00:02:19.095 Message: 00:02:19.095 =============== 00:02:19.095 Drivers Enabled 00:02:19.095 =============== 00:02:19.095 00:02:19.095 common: 00:02:19.095 00:02:19.095 bus: 00:02:19.095 pci, vdev, 00:02:19.095 mempool: 00:02:19.095 ring, 00:02:19.095 dma: 00:02:19.095 00:02:19.095 net: 00:02:19.095 00:02:19.095 crypto: 00:02:19.095 00:02:19.095 compress: 00:02:19.095 00:02:19.095 vdpa: 00:02:19.095 00:02:19.095 00:02:19.095 Message: 00:02:19.095 ================= 00:02:19.095 Content Skipped 00:02:19.095 ================= 00:02:19.095 00:02:19.095 apps: 00:02:19.095 dumpcap: explicitly disabled via build config 00:02:19.095 graph: explicitly disabled via build config 00:02:19.095 pdump: explicitly disabled via build config 00:02:19.095 proc-info: explicitly disabled via build config 00:02:19.095 test-acl: explicitly disabled via build config 00:02:19.095 test-bbdev: explicitly disabled via build config 00:02:19.095 test-cmdline: explicitly disabled via build config 00:02:19.095 test-compress-perf: explicitly disabled via build config 00:02:19.095 test-crypto-perf: explicitly disabled via build config 00:02:19.095 test-dma-perf: explicitly disabled via build config 00:02:19.095 test-eventdev: explicitly disabled via build config 00:02:19.095 test-fib: explicitly disabled via build config 00:02:19.095 test-flow-perf: explicitly disabled via build config 00:02:19.095 test-gpudev: explicitly disabled via build config 00:02:19.095 test-mldev: explicitly disabled via build config 00:02:19.095 test-pipeline: explicitly disabled via build config 00:02:19.095 test-pmd: explicitly disabled via build config 
00:02:19.095 test-regex: explicitly disabled via build config 00:02:19.095 test-sad: explicitly disabled via build config 00:02:19.095 test-security-perf: explicitly disabled via build config 00:02:19.095 00:02:19.095 libs: 00:02:19.095 argparse: explicitly disabled via build config 00:02:19.095 metrics: explicitly disabled via build config 00:02:19.095 acl: explicitly disabled via build config 00:02:19.095 bbdev: explicitly disabled via build config 00:02:19.095 bitratestats: explicitly disabled via build config 00:02:19.095 bpf: explicitly disabled via build config 00:02:19.095 cfgfile: explicitly disabled via build config 00:02:19.095 distributor: explicitly disabled via build config 00:02:19.095 efd: explicitly disabled via build config 00:02:19.095 eventdev: explicitly disabled via build config 00:02:19.095 dispatcher: explicitly disabled via build config 00:02:19.095 gpudev: explicitly disabled via build config 00:02:19.095 gro: explicitly disabled via build config 00:02:19.095 gso: explicitly disabled via build config 00:02:19.095 ip_frag: explicitly disabled via build config 00:02:19.095 jobstats: explicitly disabled via build config 00:02:19.095 latencystats: explicitly disabled via build config 00:02:19.095 lpm: explicitly disabled via build config 00:02:19.095 member: explicitly disabled via build config 00:02:19.095 pcapng: explicitly disabled via build config 00:02:19.095 rawdev: explicitly disabled via build config 00:02:19.095 regexdev: explicitly disabled via build config 00:02:19.095 mldev: explicitly disabled via build config 00:02:19.095 rib: explicitly disabled via build config 00:02:19.095 sched: explicitly disabled via build config 00:02:19.095 stack: explicitly disabled via build config 00:02:19.095 ipsec: explicitly disabled via build config 00:02:19.095 pdcp: explicitly disabled via build config 00:02:19.095 fib: explicitly disabled via build config 00:02:19.095 port: explicitly disabled via build config 00:02:19.095 pdump: explicitly disabled via build config 00:02:19.095 table: explicitly disabled via build config 00:02:19.095 pipeline: explicitly disabled via build config 00:02:19.095 graph: explicitly disabled via build config 00:02:19.095 node: explicitly disabled via build config 00:02:19.095 00:02:19.095 drivers: 00:02:19.095 common/cpt: not in enabled drivers build config 00:02:19.095 common/dpaax: not in enabled drivers build config 00:02:19.095 common/iavf: not in enabled drivers build config 00:02:19.095 common/idpf: not in enabled drivers build config 00:02:19.095 common/ionic: not in enabled drivers build config 00:02:19.095 common/mvep: not in enabled drivers build config 00:02:19.095 common/octeontx: not in enabled drivers build config 00:02:19.095 bus/auxiliary: not in enabled drivers build config 00:02:19.095 bus/cdx: not in enabled drivers build config 00:02:19.095 bus/dpaa: not in enabled drivers build config 00:02:19.095 bus/fslmc: not in enabled drivers build config 00:02:19.095 bus/ifpga: not in enabled drivers build config 00:02:19.095 bus/platform: not in enabled drivers build config 00:02:19.095 bus/uacce: not in enabled drivers build config 00:02:19.095 bus/vmbus: not in enabled drivers build config 00:02:19.095 common/cnxk: not in enabled drivers build config 00:02:19.095 common/mlx5: not in enabled drivers build config 00:02:19.095 common/nfp: not in enabled drivers build config 00:02:19.095 common/nitrox: not in enabled drivers build config 00:02:19.095 common/qat: not in enabled drivers build config 00:02:19.095 common/sfc_efx: not in 
enabled drivers build config 00:02:19.095 mempool/bucket: not in enabled drivers build config 00:02:19.095 mempool/cnxk: not in enabled drivers build config 00:02:19.095 mempool/dpaa: not in enabled drivers build config 00:02:19.095 mempool/dpaa2: not in enabled drivers build config 00:02:19.095 mempool/octeontx: not in enabled drivers build config 00:02:19.095 mempool/stack: not in enabled drivers build config 00:02:19.095 dma/cnxk: not in enabled drivers build config 00:02:19.095 dma/dpaa: not in enabled drivers build config 00:02:19.095 dma/dpaa2: not in enabled drivers build config 00:02:19.095 dma/hisilicon: not in enabled drivers build config 00:02:19.095 dma/idxd: not in enabled drivers build config 00:02:19.095 dma/ioat: not in enabled drivers build config 00:02:19.095 dma/skeleton: not in enabled drivers build config 00:02:19.095 net/af_packet: not in enabled drivers build config 00:02:19.095 net/af_xdp: not in enabled drivers build config 00:02:19.095 net/ark: not in enabled drivers build config 00:02:19.095 net/atlantic: not in enabled drivers build config 00:02:19.095 net/avp: not in enabled drivers build config 00:02:19.095 net/axgbe: not in enabled drivers build config 00:02:19.095 net/bnx2x: not in enabled drivers build config 00:02:19.095 net/bnxt: not in enabled drivers build config 00:02:19.095 net/bonding: not in enabled drivers build config 00:02:19.095 net/cnxk: not in enabled drivers build config 00:02:19.095 net/cpfl: not in enabled drivers build config 00:02:19.095 net/cxgbe: not in enabled drivers build config 00:02:19.095 net/dpaa: not in enabled drivers build config 00:02:19.095 net/dpaa2: not in enabled drivers build config 00:02:19.095 net/e1000: not in enabled drivers build config 00:02:19.095 net/ena: not in enabled drivers build config 00:02:19.095 net/enetc: not in enabled drivers build config 00:02:19.095 net/enetfec: not in enabled drivers build config 00:02:19.095 net/enic: not in enabled drivers build config 00:02:19.095 net/failsafe: not in enabled drivers build config 00:02:19.095 net/fm10k: not in enabled drivers build config 00:02:19.095 net/gve: not in enabled drivers build config 00:02:19.095 net/hinic: not in enabled drivers build config 00:02:19.095 net/hns3: not in enabled drivers build config 00:02:19.095 net/i40e: not in enabled drivers build config 00:02:19.095 net/iavf: not in enabled drivers build config 00:02:19.095 net/ice: not in enabled drivers build config 00:02:19.095 net/idpf: not in enabled drivers build config 00:02:19.095 net/igc: not in enabled drivers build config 00:02:19.095 net/ionic: not in enabled drivers build config 00:02:19.095 net/ipn3ke: not in enabled drivers build config 00:02:19.095 net/ixgbe: not in enabled drivers build config 00:02:19.095 net/mana: not in enabled drivers build config 00:02:19.095 net/memif: not in enabled drivers build config 00:02:19.095 net/mlx4: not in enabled drivers build config 00:02:19.095 net/mlx5: not in enabled drivers build config 00:02:19.095 net/mvneta: not in enabled drivers build config 00:02:19.095 net/mvpp2: not in enabled drivers build config 00:02:19.095 net/netvsc: not in enabled drivers build config 00:02:19.095 net/nfb: not in enabled drivers build config 00:02:19.095 net/nfp: not in enabled drivers build config 00:02:19.095 net/ngbe: not in enabled drivers build config 00:02:19.095 net/null: not in enabled drivers build config 00:02:19.095 net/octeontx: not in enabled drivers build config 00:02:19.095 net/octeon_ep: not in enabled drivers build config 00:02:19.095 
net/pcap: not in enabled drivers build config 00:02:19.095 net/pfe: not in enabled drivers build config 00:02:19.095 net/qede: not in enabled drivers build config 00:02:19.095 net/ring: not in enabled drivers build config 00:02:19.095 net/sfc: not in enabled drivers build config 00:02:19.095 net/softnic: not in enabled drivers build config 00:02:19.095 net/tap: not in enabled drivers build config 00:02:19.095 net/thunderx: not in enabled drivers build config 00:02:19.095 net/txgbe: not in enabled drivers build config 00:02:19.095 net/vdev_netvsc: not in enabled drivers build config 00:02:19.095 net/vhost: not in enabled drivers build config 00:02:19.096 net/virtio: not in enabled drivers build config 00:02:19.096 net/vmxnet3: not in enabled drivers build config 00:02:19.096 raw/*: missing internal dependency, "rawdev" 00:02:19.096 crypto/armv8: not in enabled drivers build config 00:02:19.096 crypto/bcmfs: not in enabled drivers build config 00:02:19.096 crypto/caam_jr: not in enabled drivers build config 00:02:19.096 crypto/ccp: not in enabled drivers build config 00:02:19.096 crypto/cnxk: not in enabled drivers build config 00:02:19.096 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.096 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.096 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.096 crypto/mlx5: not in enabled drivers build config 00:02:19.096 crypto/mvsam: not in enabled drivers build config 00:02:19.096 crypto/nitrox: not in enabled drivers build config 00:02:19.096 crypto/null: not in enabled drivers build config 00:02:19.096 crypto/octeontx: not in enabled drivers build config 00:02:19.096 crypto/openssl: not in enabled drivers build config 00:02:19.096 crypto/scheduler: not in enabled drivers build config 00:02:19.096 crypto/uadk: not in enabled drivers build config 00:02:19.096 crypto/virtio: not in enabled drivers build config 00:02:19.096 compress/isal: not in enabled drivers build config 00:02:19.096 compress/mlx5: not in enabled drivers build config 00:02:19.096 compress/nitrox: not in enabled drivers build config 00:02:19.096 compress/octeontx: not in enabled drivers build config 00:02:19.096 compress/zlib: not in enabled drivers build config 00:02:19.096 regex/*: missing internal dependency, "regexdev" 00:02:19.096 ml/*: missing internal dependency, "mldev" 00:02:19.096 vdpa/ifc: not in enabled drivers build config 00:02:19.096 vdpa/mlx5: not in enabled drivers build config 00:02:19.096 vdpa/nfp: not in enabled drivers build config 00:02:19.096 vdpa/sfc: not in enabled drivers build config 00:02:19.096 event/*: missing internal dependency, "eventdev" 00:02:19.096 baseband/*: missing internal dependency, "bbdev" 00:02:19.096 gpu/*: missing internal dependency, "gpudev" 00:02:19.096 00:02:19.096 00:02:19.352 Build targets in project: 85 00:02:19.352 00:02:19.352 DPDK 24.03.0 00:02:19.352 00:02:19.352 User defined options 00:02:19.352 buildtype : debug 00:02:19.352 default_library : shared 00:02:19.352 libdir : lib 00:02:19.352 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:19.352 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:19.352 c_link_args : 00:02:19.352 cpu_instruction_set: native 00:02:19.352 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:19.352 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:19.352 enable_docs : false 00:02:19.352 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:19.352 enable_kmods : false 00:02:19.352 max_lcores : 128 00:02:19.352 tests : false 00:02:19.352 00:02:19.352 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.615 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:19.615 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.615 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.873 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.873 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:19.873 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.873 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.873 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.873 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.873 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.873 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.873 [11/268] Linking static target lib/librte_kvargs.a 00:02:19.873 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.873 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:19.873 [14/268] Linking static target lib/librte_log.a 00:02:19.873 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.873 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:20.443 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.704 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:20.704 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:20.704 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:20.704 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.704 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:20.704 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:20.704 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.704 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:20.704 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:20.704 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.704 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:20.704 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.704 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:20.704 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:20.704 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.704 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.704 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:20.704 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:20.704 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:20.704 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:20.704 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:20.704 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.705 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.705 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.705 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.705 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.705 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:20.705 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:20.705 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:20.705 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.705 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.705 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.705 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.705 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.705 [52/268] Linking static target lib/librte_telemetry.a 00:02:20.705 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.705 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:20.705 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:20.705 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.705 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:20.705 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.964 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:20.964 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.964 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.964 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:20.964 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.964 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.964 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.964 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.229 [67/268] Linking target lib/librte_log.so.24.1 00:02:21.229 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.229 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.229 [70/268] Linking static target lib/librte_pci.a 
00:02:21.229 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.489 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:21.489 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.489 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.489 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.489 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.489 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.489 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.489 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.489 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.489 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:21.489 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.489 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.750 [84/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.750 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.750 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.750 [87/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.750 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.750 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.750 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.750 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.751 [92/268] Linking target lib/librte_kvargs.so.24.1 00:02:21.751 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.751 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.751 [95/268] Linking static target lib/librte_ring.a 00:02:21.751 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.751 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.751 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.751 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.751 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.751 [101/268] Linking static target lib/librte_meter.a 00:02:21.751 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.751 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.751 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.751 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.751 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.751 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.751 [108/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.751 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.015 [110/268] Linking static target lib/librte_eal.a 00:02:22.015 [111/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.015 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.015 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.015 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:22.015 [115/268] Linking target lib/librte_telemetry.so.24.1 00:02:22.015 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.015 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.015 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:22.015 [119/268] Linking static target lib/librte_rcu.a 00:02:22.015 [120/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.015 [121/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.015 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.015 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.015 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.015 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.015 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.015 [127/268] Linking static target lib/librte_mempool.a 00:02:22.015 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.277 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.277 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.277 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.277 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.277 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.277 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.277 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:22.277 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.277 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.277 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.277 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.277 [140/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.277 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.541 [142/268] Linking static target lib/librte_net.a 00:02:22.542 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.542 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.542 [145/268] Linking static target lib/librte_cmdline.a 00:02:22.542 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.542 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.542 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.800 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.800 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.800 [151/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.800 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.800 [153/268] Linking static target lib/librte_timer.a 00:02:22.800 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.800 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.800 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.800 [157/268] Linking static target lib/librte_dmadev.a 00:02:22.800 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.800 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.800 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.800 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:23.059 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.059 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.059 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.059 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.059 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.059 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.059 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.059 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.059 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:23.059 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.059 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:23.059 [173/268] Linking static target lib/librte_power.a 00:02:23.059 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.317 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.317 [176/268] Linking static target lib/librte_compressdev.a 00:02:23.317 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.317 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.317 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.317 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.317 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.317 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.317 [183/268] Linking static target lib/librte_hash.a 00:02:23.317 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.317 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.317 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.317 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.317 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.317 [189/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.317 [190/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.317 [191/268] 
Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:23.317 [192/268] Linking static target lib/librte_reorder.a 00:02:23.575 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.575 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.575 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.575 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.575 [197/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.575 [198/268] Linking static target lib/librte_mbuf.a 00:02:23.575 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.575 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.575 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.575 [202/268] Linking static target drivers/librte_bus_pci.a 00:02:23.575 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.575 [204/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.575 [205/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.575 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.575 [207/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.575 [208/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.575 [209/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.575 [210/268] Linking static target lib/librte_security.a 00:02:23.833 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.833 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.833 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.833 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.833 [215/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.833 [216/268] Linking static target drivers/librte_mempool_ring.a 00:02:23.833 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.833 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.833 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.091 [220/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.091 [221/268] Linking static target lib/librte_cryptodev.a 00:02:24.091 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.091 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.091 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.091 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.091 [226/268] Linking static target lib/librte_ethdev.a 00:02:25.024 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.411 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.307 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.307 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.307 [231/268] Linking target lib/librte_eal.so.24.1 00:02:28.307 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:28.564 [233/268] Linking target lib/librte_timer.so.24.1 00:02:28.564 [234/268] Linking target lib/librte_ring.so.24.1 00:02:28.564 [235/268] Linking target lib/librte_meter.so.24.1 00:02:28.564 [236/268] Linking target lib/librte_pci.so.24.1 00:02:28.564 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:28.564 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:28.564 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:28.564 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:28.564 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:28.564 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:28.564 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:28.564 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:28.564 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:28.564 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:28.821 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:28.821 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:28.821 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:28.821 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:28.821 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:29.085 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:29.085 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:29.085 [254/268] Linking target lib/librte_net.so.24.1 00:02:29.085 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:29.085 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:29.085 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:29.085 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:29.085 [259/268] Linking target lib/librte_security.so.24.1 00:02:29.085 [260/268] Linking target lib/librte_hash.so.24.1 00:02:29.085 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:29.343 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:29.343 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:29.343 [264/268] Linking target lib/librte_power.so.24.1 00:02:31.878 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.878 [266/268] Linking static target lib/librte_vhost.a 00:02:32.811 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.811 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.811 INFO: autodetecting backend as ninja 00:02:32.811 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:33.742 CC lib/ut/ut.o 00:02:33.742 CC lib/ut_mock/mock.o 00:02:33.742 CC 
lib/log/log.o 00:02:33.742 CC lib/log/log_flags.o 00:02:33.742 CC lib/log/log_deprecated.o 00:02:33.742 LIB libspdk_log.a 00:02:33.742 LIB libspdk_ut.a 00:02:33.742 LIB libspdk_ut_mock.a 00:02:33.742 SO libspdk_ut.so.2.0 00:02:33.742 SO libspdk_ut_mock.so.6.0 00:02:33.742 SO libspdk_log.so.7.0 00:02:33.999 SYMLINK libspdk_ut_mock.so 00:02:33.999 SYMLINK libspdk_ut.so 00:02:33.999 SYMLINK libspdk_log.so 00:02:33.999 CC lib/util/base64.o 00:02:33.999 CXX lib/trace_parser/trace.o 00:02:33.999 CC lib/ioat/ioat.o 00:02:33.999 CC lib/dma/dma.o 00:02:33.999 CC lib/util/bit_array.o 00:02:33.999 CC lib/util/cpuset.o 00:02:33.999 CC lib/util/crc16.o 00:02:33.999 CC lib/util/crc32.o 00:02:33.999 CC lib/util/crc32c.o 00:02:33.999 CC lib/util/crc32_ieee.o 00:02:33.999 CC lib/util/crc64.o 00:02:33.999 CC lib/util/dif.o 00:02:33.999 CC lib/util/fd.o 00:02:33.999 CC lib/util/file.o 00:02:33.999 CC lib/util/hexlify.o 00:02:33.999 CC lib/util/iov.o 00:02:33.999 CC lib/util/math.o 00:02:33.999 CC lib/util/pipe.o 00:02:33.999 CC lib/util/strerror_tls.o 00:02:33.999 CC lib/util/string.o 00:02:33.999 CC lib/util/uuid.o 00:02:33.999 CC lib/util/fd_group.o 00:02:33.999 CC lib/util/xor.o 00:02:33.999 CC lib/util/zipf.o 00:02:34.256 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.256 CC lib/vfio_user/host/vfio_user.o 00:02:34.256 LIB libspdk_dma.a 00:02:34.256 SO libspdk_dma.so.4.0 00:02:34.512 SYMLINK libspdk_dma.so 00:02:34.512 LIB libspdk_ioat.a 00:02:34.512 SO libspdk_ioat.so.7.0 00:02:34.512 LIB libspdk_vfio_user.a 00:02:34.512 SYMLINK libspdk_ioat.so 00:02:34.512 SO libspdk_vfio_user.so.5.0 00:02:34.512 SYMLINK libspdk_vfio_user.so 00:02:34.512 LIB libspdk_util.a 00:02:34.769 SO libspdk_util.so.9.1 00:02:34.769 SYMLINK libspdk_util.so 00:02:35.029 CC lib/conf/conf.o 00:02:35.029 CC lib/rdma_provider/common.o 00:02:35.029 CC lib/vmd/vmd.o 00:02:35.029 CC lib/rdma_utils/rdma_utils.o 00:02:35.029 CC lib/idxd/idxd.o 00:02:35.029 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.029 CC lib/json/json_parse.o 00:02:35.029 CC lib/vmd/led.o 00:02:35.029 CC lib/json/json_util.o 00:02:35.029 CC lib/env_dpdk/env.o 00:02:35.029 CC lib/idxd/idxd_user.o 00:02:35.029 CC lib/json/json_write.o 00:02:35.029 CC lib/env_dpdk/memory.o 00:02:35.029 CC lib/idxd/idxd_kernel.o 00:02:35.029 CC lib/env_dpdk/pci.o 00:02:35.029 CC lib/env_dpdk/init.o 00:02:35.029 CC lib/env_dpdk/threads.o 00:02:35.029 CC lib/env_dpdk/pci_ioat.o 00:02:35.029 CC lib/env_dpdk/pci_virtio.o 00:02:35.029 CC lib/env_dpdk/pci_vmd.o 00:02:35.029 CC lib/env_dpdk/pci_idxd.o 00:02:35.029 CC lib/env_dpdk/pci_event.o 00:02:35.029 CC lib/env_dpdk/sigbus_handler.o 00:02:35.029 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.029 CC lib/env_dpdk/pci_dpdk.o 00:02:35.029 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.029 LIB libspdk_trace_parser.a 00:02:35.029 SO libspdk_trace_parser.so.5.0 00:02:35.320 SYMLINK libspdk_trace_parser.so 00:02:35.320 LIB libspdk_rdma_provider.a 00:02:35.320 SO libspdk_rdma_provider.so.6.0 00:02:35.320 SYMLINK libspdk_rdma_provider.so 00:02:35.320 LIB libspdk_rdma_utils.a 00:02:35.320 SO libspdk_rdma_utils.so.1.0 00:02:35.320 LIB libspdk_conf.a 00:02:35.320 LIB libspdk_json.a 00:02:35.320 SO libspdk_conf.so.6.0 00:02:35.320 SO libspdk_json.so.6.0 00:02:35.320 SYMLINK libspdk_rdma_utils.so 00:02:35.320 SYMLINK libspdk_conf.so 00:02:35.600 SYMLINK libspdk_json.so 00:02:35.600 LIB libspdk_idxd.a 00:02:35.600 CC lib/jsonrpc/jsonrpc_server.o 00:02:35.600 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.600 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.600 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:35.600 SO libspdk_idxd.so.12.0 00:02:35.600 SYMLINK libspdk_idxd.so 00:02:35.600 LIB libspdk_vmd.a 00:02:35.600 SO libspdk_vmd.so.6.0 00:02:35.858 SYMLINK libspdk_vmd.so 00:02:35.858 LIB libspdk_jsonrpc.a 00:02:35.858 SO libspdk_jsonrpc.so.6.0 00:02:35.858 SYMLINK libspdk_jsonrpc.so 00:02:36.115 CC lib/rpc/rpc.o 00:02:36.374 LIB libspdk_rpc.a 00:02:36.374 SO libspdk_rpc.so.6.0 00:02:36.374 SYMLINK libspdk_rpc.so 00:02:36.632 CC lib/keyring/keyring.o 00:02:36.632 CC lib/notify/notify.o 00:02:36.632 CC lib/keyring/keyring_rpc.o 00:02:36.632 CC lib/notify/notify_rpc.o 00:02:36.632 CC lib/trace/trace.o 00:02:36.632 CC lib/trace/trace_flags.o 00:02:36.632 CC lib/trace/trace_rpc.o 00:02:36.632 LIB libspdk_notify.a 00:02:36.890 SO libspdk_notify.so.6.0 00:02:36.890 LIB libspdk_keyring.a 00:02:36.890 SYMLINK libspdk_notify.so 00:02:36.890 LIB libspdk_trace.a 00:02:36.890 SO libspdk_keyring.so.1.0 00:02:36.890 SO libspdk_trace.so.10.0 00:02:36.890 SYMLINK libspdk_keyring.so 00:02:36.890 SYMLINK libspdk_trace.so 00:02:37.147 LIB libspdk_env_dpdk.a 00:02:37.147 CC lib/sock/sock.o 00:02:37.147 CC lib/sock/sock_rpc.o 00:02:37.147 CC lib/thread/thread.o 00:02:37.147 CC lib/thread/iobuf.o 00:02:37.147 SO libspdk_env_dpdk.so.14.1 00:02:37.405 SYMLINK libspdk_env_dpdk.so 00:02:37.405 LIB libspdk_sock.a 00:02:37.405 SO libspdk_sock.so.10.0 00:02:37.663 SYMLINK libspdk_sock.so 00:02:37.663 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.663 CC lib/nvme/nvme_ctrlr.o 00:02:37.663 CC lib/nvme/nvme_fabric.o 00:02:37.663 CC lib/nvme/nvme_ns_cmd.o 00:02:37.663 CC lib/nvme/nvme_ns.o 00:02:37.663 CC lib/nvme/nvme_pcie_common.o 00:02:37.663 CC lib/nvme/nvme_pcie.o 00:02:37.663 CC lib/nvme/nvme_qpair.o 00:02:37.663 CC lib/nvme/nvme.o 00:02:37.663 CC lib/nvme/nvme_quirks.o 00:02:37.663 CC lib/nvme/nvme_transport.o 00:02:37.663 CC lib/nvme/nvme_discovery.o 00:02:37.663 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:37.663 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:37.663 CC lib/nvme/nvme_tcp.o 00:02:37.663 CC lib/nvme/nvme_opal.o 00:02:37.663 CC lib/nvme/nvme_io_msg.o 00:02:37.663 CC lib/nvme/nvme_poll_group.o 00:02:37.663 CC lib/nvme/nvme_zns.o 00:02:37.663 CC lib/nvme/nvme_stubs.o 00:02:37.663 CC lib/nvme/nvme_auth.o 00:02:37.663 CC lib/nvme/nvme_cuse.o 00:02:37.663 CC lib/nvme/nvme_vfio_user.o 00:02:37.663 CC lib/nvme/nvme_rdma.o 00:02:38.595 LIB libspdk_thread.a 00:02:38.853 SO libspdk_thread.so.10.1 00:02:38.853 SYMLINK libspdk_thread.so 00:02:38.853 CC lib/vfu_tgt/tgt_endpoint.o 00:02:38.853 CC lib/blob/blobstore.o 00:02:38.853 CC lib/init/json_config.o 00:02:38.853 CC lib/blob/request.o 00:02:38.853 CC lib/vfu_tgt/tgt_rpc.o 00:02:38.853 CC lib/virtio/virtio.o 00:02:38.853 CC lib/accel/accel.o 00:02:38.853 CC lib/init/subsystem.o 00:02:38.853 CC lib/blob/zeroes.o 00:02:38.853 CC lib/accel/accel_rpc.o 00:02:38.853 CC lib/virtio/virtio_vhost_user.o 00:02:38.853 CC lib/blob/blob_bs_dev.o 00:02:38.853 CC lib/init/subsystem_rpc.o 00:02:38.853 CC lib/virtio/virtio_vfio_user.o 00:02:38.853 CC lib/accel/accel_sw.o 00:02:38.853 CC lib/init/rpc.o 00:02:38.853 CC lib/virtio/virtio_pci.o 00:02:39.110 LIB libspdk_init.a 00:02:39.367 SO libspdk_init.so.5.0 00:02:39.367 LIB libspdk_virtio.a 00:02:39.367 LIB libspdk_vfu_tgt.a 00:02:39.367 SYMLINK libspdk_init.so 00:02:39.367 SO libspdk_vfu_tgt.so.3.0 00:02:39.367 SO libspdk_virtio.so.7.0 00:02:39.367 SYMLINK libspdk_vfu_tgt.so 00:02:39.367 SYMLINK libspdk_virtio.so 00:02:39.367 CC lib/event/app.o 00:02:39.367 CC lib/event/reactor.o 00:02:39.367 CC 
lib/event/log_rpc.o 00:02:39.367 CC lib/event/app_rpc.o 00:02:39.367 CC lib/event/scheduler_static.o 00:02:39.932 LIB libspdk_event.a 00:02:39.932 SO libspdk_event.so.14.0 00:02:39.932 LIB libspdk_accel.a 00:02:39.932 SYMLINK libspdk_event.so 00:02:39.932 SO libspdk_accel.so.15.1 00:02:40.190 SYMLINK libspdk_accel.so 00:02:40.190 LIB libspdk_nvme.a 00:02:40.190 CC lib/bdev/bdev.o 00:02:40.190 CC lib/bdev/bdev_rpc.o 00:02:40.190 CC lib/bdev/bdev_zone.o 00:02:40.190 SO libspdk_nvme.so.13.1 00:02:40.190 CC lib/bdev/part.o 00:02:40.190 CC lib/bdev/scsi_nvme.o 00:02:40.448 SYMLINK libspdk_nvme.so 00:02:41.818 LIB libspdk_blob.a 00:02:41.818 SO libspdk_blob.so.11.0 00:02:42.075 SYMLINK libspdk_blob.so 00:02:42.075 CC lib/blobfs/blobfs.o 00:02:42.075 CC lib/blobfs/tree.o 00:02:42.075 CC lib/lvol/lvol.o 00:02:43.011 LIB libspdk_bdev.a 00:02:43.011 SO libspdk_bdev.so.15.1 00:02:43.011 SYMLINK libspdk_bdev.so 00:02:43.011 LIB libspdk_blobfs.a 00:02:43.011 SO libspdk_blobfs.so.10.0 00:02:43.011 SYMLINK libspdk_blobfs.so 00:02:43.011 CC lib/scsi/dev.o 00:02:43.011 CC lib/nbd/nbd.o 00:02:43.011 CC lib/ublk/ublk.o 00:02:43.011 CC lib/scsi/lun.o 00:02:43.011 CC lib/nvmf/ctrlr.o 00:02:43.011 CC lib/nbd/nbd_rpc.o 00:02:43.011 CC lib/ublk/ublk_rpc.o 00:02:43.011 CC lib/ftl/ftl_core.o 00:02:43.011 CC lib/scsi/port.o 00:02:43.011 CC lib/nvmf/ctrlr_discovery.o 00:02:43.011 CC lib/ftl/ftl_init.o 00:02:43.011 CC lib/nvmf/ctrlr_bdev.o 00:02:43.011 CC lib/ftl/ftl_layout.o 00:02:43.011 CC lib/scsi/scsi.o 00:02:43.011 CC lib/nvmf/subsystem.o 00:02:43.011 CC lib/nvmf/nvmf.o 00:02:43.012 CC lib/ftl/ftl_debug.o 00:02:43.012 CC lib/nvmf/nvmf_rpc.o 00:02:43.012 CC lib/scsi/scsi_bdev.o 00:02:43.012 CC lib/ftl/ftl_io.o 00:02:43.012 CC lib/scsi/scsi_pr.o 00:02:43.012 CC lib/scsi/scsi_rpc.o 00:02:43.012 CC lib/nvmf/transport.o 00:02:43.012 CC lib/ftl/ftl_sb.o 00:02:43.012 CC lib/scsi/task.o 00:02:43.012 CC lib/nvmf/tcp.o 00:02:43.012 CC lib/ftl/ftl_l2p.o 00:02:43.012 CC lib/nvmf/stubs.o 00:02:43.012 CC lib/ftl/ftl_l2p_flat.o 00:02:43.012 CC lib/ftl/ftl_nv_cache.o 00:02:43.012 CC lib/nvmf/mdns_server.o 00:02:43.012 CC lib/ftl/ftl_band.o 00:02:43.012 CC lib/nvmf/vfio_user.o 00:02:43.012 LIB libspdk_lvol.a 00:02:43.012 CC lib/ftl/ftl_band_ops.o 00:02:43.012 CC lib/nvmf/auth.o 00:02:43.012 CC lib/nvmf/rdma.o 00:02:43.012 CC lib/ftl/ftl_writer.o 00:02:43.012 CC lib/ftl/ftl_rq.o 00:02:43.012 CC lib/ftl/ftl_reloc.o 00:02:43.012 CC lib/ftl/ftl_l2p_cache.o 00:02:43.012 CC lib/ftl/ftl_p2l.o 00:02:43.012 CC lib/ftl/mngt/ftl_mngt.o 00:02:43.012 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:43.012 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.012 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:43.012 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.012 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.274 SO libspdk_lvol.so.10.0 00:02:43.274 SYMLINK libspdk_lvol.so 00:02:43.274 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.542 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.542 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.542 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:43.542 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:43.542 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:43.542 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:43.542 CC lib/ftl/utils/ftl_conf.o 00:02:43.542 CC lib/ftl/utils/ftl_md.o 00:02:43.542 CC lib/ftl/utils/ftl_mempool.o 00:02:43.542 CC lib/ftl/utils/ftl_bitmap.o 00:02:43.542 CC lib/ftl/utils/ftl_property.o 00:02:43.542 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:43.542 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:43.542 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:43.542 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:43.542 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:43.542 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:43.542 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:43.799 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:43.799 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:43.799 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:43.799 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:43.799 CC lib/ftl/base/ftl_base_dev.o 00:02:43.799 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.799 CC lib/ftl/ftl_trace.o 00:02:43.799 LIB libspdk_nbd.a 00:02:44.056 SO libspdk_nbd.so.7.0 00:02:44.056 LIB libspdk_scsi.a 00:02:44.056 SYMLINK libspdk_nbd.so 00:02:44.056 SO libspdk_scsi.so.9.0 00:02:44.056 LIB libspdk_ublk.a 00:02:44.056 SO libspdk_ublk.so.3.0 00:02:44.056 SYMLINK libspdk_scsi.so 00:02:44.314 SYMLINK libspdk_ublk.so 00:02:44.314 CC lib/vhost/vhost.o 00:02:44.314 CC lib/iscsi/conn.o 00:02:44.314 CC lib/vhost/vhost_rpc.o 00:02:44.314 CC lib/iscsi/init_grp.o 00:02:44.314 CC lib/vhost/vhost_scsi.o 00:02:44.314 CC lib/vhost/vhost_blk.o 00:02:44.314 CC lib/iscsi/iscsi.o 00:02:44.314 CC lib/vhost/rte_vhost_user.o 00:02:44.314 CC lib/iscsi/md5.o 00:02:44.314 CC lib/iscsi/param.o 00:02:44.314 CC lib/iscsi/portal_grp.o 00:02:44.314 CC lib/iscsi/tgt_node.o 00:02:44.314 CC lib/iscsi/iscsi_subsystem.o 00:02:44.314 CC lib/iscsi/iscsi_rpc.o 00:02:44.314 CC lib/iscsi/task.o 00:02:44.571 LIB libspdk_ftl.a 00:02:44.828 SO libspdk_ftl.so.9.0 00:02:45.094 SYMLINK libspdk_ftl.so 00:02:45.658 LIB libspdk_vhost.a 00:02:45.658 SO libspdk_vhost.so.8.0 00:02:45.658 LIB libspdk_nvmf.a 00:02:45.658 SYMLINK libspdk_vhost.so 00:02:45.658 SO libspdk_nvmf.so.18.1 00:02:45.917 LIB libspdk_iscsi.a 00:02:45.917 SO libspdk_iscsi.so.8.0 00:02:45.917 SYMLINK libspdk_nvmf.so 00:02:45.917 SYMLINK libspdk_iscsi.so 00:02:46.177 CC module/env_dpdk/env_dpdk_rpc.o 00:02:46.177 CC module/vfu_device/vfu_virtio.o 00:02:46.177 CC module/vfu_device/vfu_virtio_blk.o 00:02:46.177 CC module/vfu_device/vfu_virtio_scsi.o 00:02:46.177 CC module/vfu_device/vfu_virtio_rpc.o 00:02:46.434 CC module/accel/ioat/accel_ioat.o 00:02:46.434 CC module/accel/ioat/accel_ioat_rpc.o 00:02:46.434 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:46.434 CC module/keyring/linux/keyring.o 00:02:46.434 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:46.434 CC module/accel/dsa/accel_dsa.o 00:02:46.434 CC module/keyring/linux/keyring_rpc.o 00:02:46.434 CC module/accel/iaa/accel_iaa.o 00:02:46.434 CC module/accel/dsa/accel_dsa_rpc.o 00:02:46.434 CC module/accel/iaa/accel_iaa_rpc.o 00:02:46.434 CC module/accel/error/accel_error.o 00:02:46.434 CC module/keyring/file/keyring.o 00:02:46.434 CC module/keyring/file/keyring_rpc.o 00:02:46.434 CC module/accel/error/accel_error_rpc.o 00:02:46.434 CC module/blob/bdev/blob_bdev.o 00:02:46.434 CC module/scheduler/gscheduler/gscheduler.o 00:02:46.434 CC module/sock/posix/posix.o 00:02:46.434 LIB libspdk_env_dpdk_rpc.a 00:02:46.434 SO libspdk_env_dpdk_rpc.so.6.0 00:02:46.434 SYMLINK libspdk_env_dpdk_rpc.so 00:02:46.434 LIB libspdk_keyring_linux.a 00:02:46.434 LIB libspdk_keyring_file.a 00:02:46.434 LIB libspdk_scheduler_gscheduler.a 00:02:46.434 LIB libspdk_scheduler_dpdk_governor.a 00:02:46.435 SO libspdk_keyring_linux.so.1.0 00:02:46.435 SO libspdk_keyring_file.so.1.0 00:02:46.435 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:46.435 SO libspdk_scheduler_gscheduler.so.4.0 00:02:46.692 LIB libspdk_accel_error.a 00:02:46.692 LIB libspdk_accel_ioat.a 00:02:46.692 LIB libspdk_scheduler_dynamic.a 00:02:46.692 LIB libspdk_accel_iaa.a 00:02:46.692 
SO libspdk_accel_error.so.2.0 00:02:46.692 SO libspdk_accel_ioat.so.6.0 00:02:46.692 SYMLINK libspdk_keyring_linux.so 00:02:46.692 SO libspdk_scheduler_dynamic.so.4.0 00:02:46.692 SYMLINK libspdk_keyring_file.so 00:02:46.692 SYMLINK libspdk_scheduler_gscheduler.so 00:02:46.692 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:46.692 SO libspdk_accel_iaa.so.3.0 00:02:46.692 LIB libspdk_accel_dsa.a 00:02:46.692 SYMLINK libspdk_accel_error.so 00:02:46.693 LIB libspdk_blob_bdev.a 00:02:46.693 SYMLINK libspdk_scheduler_dynamic.so 00:02:46.693 SYMLINK libspdk_accel_ioat.so 00:02:46.693 SO libspdk_accel_dsa.so.5.0 00:02:46.693 SYMLINK libspdk_accel_iaa.so 00:02:46.693 SO libspdk_blob_bdev.so.11.0 00:02:46.693 SYMLINK libspdk_accel_dsa.so 00:02:46.693 SYMLINK libspdk_blob_bdev.so 00:02:46.950 LIB libspdk_vfu_device.a 00:02:46.950 SO libspdk_vfu_device.so.3.0 00:02:46.950 CC module/bdev/malloc/bdev_malloc.o 00:02:46.950 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.950 CC module/bdev/null/bdev_null.o 00:02:46.950 CC module/bdev/raid/bdev_raid.o 00:02:46.950 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.950 CC module/bdev/nvme/bdev_nvme.o 00:02:46.950 CC module/bdev/gpt/gpt.o 00:02:46.950 CC module/bdev/null/bdev_null_rpc.o 00:02:46.950 CC module/bdev/lvol/vbdev_lvol.o 00:02:46.950 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:46.950 CC module/bdev/delay/vbdev_delay.o 00:02:46.950 CC module/bdev/error/vbdev_error.o 00:02:46.950 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.950 CC module/bdev/passthru/vbdev_passthru.o 00:02:46.950 CC module/bdev/gpt/vbdev_gpt.o 00:02:46.950 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.950 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:46.950 CC module/bdev/aio/bdev_aio.o 00:02:46.950 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:46.950 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:46.950 CC module/bdev/split/vbdev_split.o 00:02:46.950 CC module/bdev/error/vbdev_error_rpc.o 00:02:46.950 CC module/bdev/aio/bdev_aio_rpc.o 00:02:46.950 CC module/bdev/nvme/nvme_rpc.o 00:02:46.950 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.950 CC module/bdev/nvme/bdev_mdns_client.o 00:02:46.950 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:46.950 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:46.950 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.950 CC module/bdev/ftl/bdev_ftl.o 00:02:46.950 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:46.950 CC module/bdev/nvme/vbdev_opal.o 00:02:46.950 CC module/bdev/raid/raid0.o 00:02:46.950 CC module/bdev/raid/raid1.o 00:02:46.950 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.950 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:46.950 CC module/bdev/raid/concat.o 00:02:46.950 CC module/bdev/iscsi/bdev_iscsi.o 00:02:46.950 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.950 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.950 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.950 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:47.208 SYMLINK libspdk_vfu_device.so 00:02:47.208 LIB libspdk_sock_posix.a 00:02:47.208 SO libspdk_sock_posix.so.6.0 00:02:47.466 LIB libspdk_blobfs_bdev.a 00:02:47.466 LIB libspdk_bdev_null.a 00:02:47.466 SO libspdk_blobfs_bdev.so.6.0 00:02:47.466 SO libspdk_bdev_null.so.6.0 00:02:47.466 LIB libspdk_bdev_aio.a 00:02:47.466 SYMLINK libspdk_sock_posix.so 00:02:47.466 LIB libspdk_bdev_split.a 00:02:47.466 LIB libspdk_bdev_error.a 00:02:47.466 SYMLINK libspdk_blobfs_bdev.so 00:02:47.466 LIB libspdk_bdev_passthru.a 00:02:47.466 SYMLINK libspdk_bdev_null.so 00:02:47.466 SO libspdk_bdev_aio.so.6.0 00:02:47.466 SO 
libspdk_bdev_error.so.6.0 00:02:47.466 SO libspdk_bdev_split.so.6.0 00:02:47.466 LIB libspdk_bdev_gpt.a 00:02:47.466 SO libspdk_bdev_passthru.so.6.0 00:02:47.466 SO libspdk_bdev_gpt.so.6.0 00:02:47.467 SYMLINK libspdk_bdev_aio.so 00:02:47.467 SYMLINK libspdk_bdev_error.so 00:02:47.467 SYMLINK libspdk_bdev_split.so 00:02:47.467 LIB libspdk_bdev_ftl.a 00:02:47.467 SYMLINK libspdk_bdev_passthru.so 00:02:47.467 LIB libspdk_bdev_delay.a 00:02:47.467 LIB libspdk_bdev_iscsi.a 00:02:47.467 SO libspdk_bdev_ftl.so.6.0 00:02:47.467 LIB libspdk_bdev_zone_block.a 00:02:47.467 SYMLINK libspdk_bdev_gpt.so 00:02:47.467 SO libspdk_bdev_delay.so.6.0 00:02:47.724 SO libspdk_bdev_iscsi.so.6.0 00:02:47.724 LIB libspdk_bdev_malloc.a 00:02:47.724 SO libspdk_bdev_zone_block.so.6.0 00:02:47.724 SO libspdk_bdev_malloc.so.6.0 00:02:47.724 SYMLINK libspdk_bdev_ftl.so 00:02:47.724 SYMLINK libspdk_bdev_delay.so 00:02:47.724 SYMLINK libspdk_bdev_iscsi.so 00:02:47.724 SYMLINK libspdk_bdev_zone_block.so 00:02:47.724 LIB libspdk_bdev_lvol.a 00:02:47.724 SYMLINK libspdk_bdev_malloc.so 00:02:47.724 SO libspdk_bdev_lvol.so.6.0 00:02:47.724 LIB libspdk_bdev_virtio.a 00:02:47.724 SYMLINK libspdk_bdev_lvol.so 00:02:47.724 SO libspdk_bdev_virtio.so.6.0 00:02:47.724 SYMLINK libspdk_bdev_virtio.so 00:02:48.288 LIB libspdk_bdev_raid.a 00:02:48.288 SO libspdk_bdev_raid.so.6.0 00:02:48.288 SYMLINK libspdk_bdev_raid.so 00:02:49.223 LIB libspdk_bdev_nvme.a 00:02:49.482 SO libspdk_bdev_nvme.so.7.0 00:02:49.482 SYMLINK libspdk_bdev_nvme.so 00:02:49.741 CC module/event/subsystems/iobuf/iobuf.o 00:02:49.741 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:49.741 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:49.741 CC module/event/subsystems/scheduler/scheduler.o 00:02:49.741 CC module/event/subsystems/keyring/keyring.o 00:02:49.741 CC module/event/subsystems/sock/sock.o 00:02:49.741 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:49.741 CC module/event/subsystems/vmd/vmd.o 00:02:49.741 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:49.998 LIB libspdk_event_keyring.a 00:02:49.998 LIB libspdk_event_vfu_tgt.a 00:02:49.998 LIB libspdk_event_vhost_blk.a 00:02:49.998 LIB libspdk_event_scheduler.a 00:02:49.998 LIB libspdk_event_vmd.a 00:02:49.998 LIB libspdk_event_sock.a 00:02:49.998 SO libspdk_event_keyring.so.1.0 00:02:49.998 LIB libspdk_event_iobuf.a 00:02:49.998 SO libspdk_event_vfu_tgt.so.3.0 00:02:49.998 SO libspdk_event_vhost_blk.so.3.0 00:02:49.998 SO libspdk_event_scheduler.so.4.0 00:02:49.998 SO libspdk_event_sock.so.5.0 00:02:49.998 SO libspdk_event_vmd.so.6.0 00:02:49.998 SO libspdk_event_iobuf.so.3.0 00:02:49.998 SYMLINK libspdk_event_keyring.so 00:02:49.998 SYMLINK libspdk_event_vhost_blk.so 00:02:49.998 SYMLINK libspdk_event_vfu_tgt.so 00:02:49.998 SYMLINK libspdk_event_scheduler.so 00:02:49.998 SYMLINK libspdk_event_sock.so 00:02:49.998 SYMLINK libspdk_event_vmd.so 00:02:49.998 SYMLINK libspdk_event_iobuf.so 00:02:50.257 CC module/event/subsystems/accel/accel.o 00:02:50.516 LIB libspdk_event_accel.a 00:02:50.516 SO libspdk_event_accel.so.6.0 00:02:50.516 SYMLINK libspdk_event_accel.so 00:02:50.774 CC module/event/subsystems/bdev/bdev.o 00:02:50.774 LIB libspdk_event_bdev.a 00:02:50.774 SO libspdk_event_bdev.so.6.0 00:02:50.774 SYMLINK libspdk_event_bdev.so 00:02:51.032 CC module/event/subsystems/ublk/ublk.o 00:02:51.032 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.032 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.032 CC module/event/subsystems/scsi/scsi.o 00:02:51.032 CC module/event/subsystems/nbd/nbd.o 
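Note on the LIB/SO/SYMLINK tags interleaved above: the quiet make output prints LIB when a component is archived into a static library, SO when its versioned shared object (e.g. libspdk_bdev_error.so.6.0) is linked, and SYMLINK when the unversioned development link is created. As a rough illustration of that convention only (libfoo and the 2.0 version are made-up placeholders, not SPDK's actual Makefile rule), the equivalent manual steps look like:

  # Build a versioned shared object and point an unversioned symlink at it.
  # libfoo/2.0 are placeholder names used purely for illustration.
  gcc -fPIC -c foo.c -o foo.o
  gcc -shared -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0 foo.o   # the "SO" step
  ln -sf libfoo.so.2.0 libfoo.so                               # the "SYMLINK" step

Consumers link against the unversioned name while the runtime loader resolves the soname, which is why both files show up per library in the log.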
00:02:51.290 LIB libspdk_event_ublk.a 00:02:51.290 LIB libspdk_event_nbd.a 00:02:51.290 LIB libspdk_event_scsi.a 00:02:51.290 SO libspdk_event_ublk.so.3.0 00:02:51.290 SO libspdk_event_nbd.so.6.0 00:02:51.290 SO libspdk_event_scsi.so.6.0 00:02:51.290 SYMLINK libspdk_event_nbd.so 00:02:51.290 SYMLINK libspdk_event_ublk.so 00:02:51.290 SYMLINK libspdk_event_scsi.so 00:02:51.290 LIB libspdk_event_nvmf.a 00:02:51.290 SO libspdk_event_nvmf.so.6.0 00:02:51.290 SYMLINK libspdk_event_nvmf.so 00:02:51.548 CC module/event/subsystems/iscsi/iscsi.o 00:02:51.548 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:51.548 LIB libspdk_event_vhost_scsi.a 00:02:51.548 LIB libspdk_event_iscsi.a 00:02:51.548 SO libspdk_event_vhost_scsi.so.3.0 00:02:51.548 SO libspdk_event_iscsi.so.6.0 00:02:51.806 SYMLINK libspdk_event_vhost_scsi.so 00:02:51.806 SYMLINK libspdk_event_iscsi.so 00:02:51.806 SO libspdk.so.6.0 00:02:51.806 SYMLINK libspdk.so 00:02:52.074 CC app/trace_record/trace_record.o 00:02:52.074 CXX app/trace/trace.o 00:02:52.074 TEST_HEADER include/spdk/accel.h 00:02:52.074 CC app/spdk_lspci/spdk_lspci.o 00:02:52.074 TEST_HEADER include/spdk/accel_module.h 00:02:52.074 TEST_HEADER include/spdk/assert.h 00:02:52.074 CC app/spdk_nvme_perf/perf.o 00:02:52.074 TEST_HEADER include/spdk/base64.h 00:02:52.074 TEST_HEADER include/spdk/barrier.h 00:02:52.074 TEST_HEADER include/spdk/bdev.h 00:02:52.074 TEST_HEADER include/spdk/bdev_module.h 00:02:52.074 TEST_HEADER include/spdk/bdev_zone.h 00:02:52.074 TEST_HEADER include/spdk/bit_array.h 00:02:52.074 CC test/rpc_client/rpc_client_test.o 00:02:52.074 CC app/spdk_nvme_identify/identify.o 00:02:52.074 TEST_HEADER include/spdk/bit_pool.h 00:02:52.074 CC app/spdk_top/spdk_top.o 00:02:52.074 CC app/spdk_nvme_discover/discovery_aer.o 00:02:52.074 TEST_HEADER include/spdk/blob_bdev.h 00:02:52.074 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:52.074 TEST_HEADER include/spdk/blobfs.h 00:02:52.074 TEST_HEADER include/spdk/blob.h 00:02:52.074 TEST_HEADER include/spdk/conf.h 00:02:52.074 TEST_HEADER include/spdk/config.h 00:02:52.074 TEST_HEADER include/spdk/cpuset.h 00:02:52.074 TEST_HEADER include/spdk/crc16.h 00:02:52.074 TEST_HEADER include/spdk/crc32.h 00:02:52.074 TEST_HEADER include/spdk/dif.h 00:02:52.074 TEST_HEADER include/spdk/crc64.h 00:02:52.074 TEST_HEADER include/spdk/dma.h 00:02:52.074 TEST_HEADER include/spdk/endian.h 00:02:52.074 TEST_HEADER include/spdk/env_dpdk.h 00:02:52.074 TEST_HEADER include/spdk/env.h 00:02:52.074 TEST_HEADER include/spdk/event.h 00:02:52.074 TEST_HEADER include/spdk/fd.h 00:02:52.074 TEST_HEADER include/spdk/fd_group.h 00:02:52.074 TEST_HEADER include/spdk/file.h 00:02:52.074 TEST_HEADER include/spdk/ftl.h 00:02:52.074 TEST_HEADER include/spdk/gpt_spec.h 00:02:52.074 TEST_HEADER include/spdk/hexlify.h 00:02:52.074 TEST_HEADER include/spdk/histogram_data.h 00:02:52.074 TEST_HEADER include/spdk/idxd.h 00:02:52.074 TEST_HEADER include/spdk/idxd_spec.h 00:02:52.074 TEST_HEADER include/spdk/init.h 00:02:52.074 TEST_HEADER include/spdk/ioat.h 00:02:52.074 TEST_HEADER include/spdk/iscsi_spec.h 00:02:52.074 TEST_HEADER include/spdk/ioat_spec.h 00:02:52.074 TEST_HEADER include/spdk/json.h 00:02:52.074 TEST_HEADER include/spdk/jsonrpc.h 00:02:52.074 TEST_HEADER include/spdk/keyring.h 00:02:52.074 TEST_HEADER include/spdk/likely.h 00:02:52.074 TEST_HEADER include/spdk/keyring_module.h 00:02:52.074 TEST_HEADER include/spdk/log.h 00:02:52.074 TEST_HEADER include/spdk/lvol.h 00:02:52.074 TEST_HEADER include/spdk/memory.h 00:02:52.074 
TEST_HEADER include/spdk/mmio.h 00:02:52.074 TEST_HEADER include/spdk/nbd.h 00:02:52.074 TEST_HEADER include/spdk/notify.h 00:02:52.074 TEST_HEADER include/spdk/nvme.h 00:02:52.074 TEST_HEADER include/spdk/nvme_intel.h 00:02:52.074 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:52.074 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:52.074 TEST_HEADER include/spdk/nvme_spec.h 00:02:52.074 TEST_HEADER include/spdk/nvme_zns.h 00:02:52.074 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:52.074 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:52.074 TEST_HEADER include/spdk/nvmf.h 00:02:52.074 TEST_HEADER include/spdk/nvmf_spec.h 00:02:52.074 TEST_HEADER include/spdk/nvmf_transport.h 00:02:52.074 TEST_HEADER include/spdk/opal.h 00:02:52.074 TEST_HEADER include/spdk/pci_ids.h 00:02:52.074 TEST_HEADER include/spdk/opal_spec.h 00:02:52.074 TEST_HEADER include/spdk/pipe.h 00:02:52.074 TEST_HEADER include/spdk/queue.h 00:02:52.074 TEST_HEADER include/spdk/rpc.h 00:02:52.074 TEST_HEADER include/spdk/reduce.h 00:02:52.074 TEST_HEADER include/spdk/scheduler.h 00:02:52.074 TEST_HEADER include/spdk/scsi_spec.h 00:02:52.074 TEST_HEADER include/spdk/scsi.h 00:02:52.074 TEST_HEADER include/spdk/sock.h 00:02:52.074 TEST_HEADER include/spdk/string.h 00:02:52.074 TEST_HEADER include/spdk/stdinc.h 00:02:52.074 TEST_HEADER include/spdk/thread.h 00:02:52.074 TEST_HEADER include/spdk/trace.h 00:02:52.074 TEST_HEADER include/spdk/trace_parser.h 00:02:52.074 TEST_HEADER include/spdk/tree.h 00:02:52.074 TEST_HEADER include/spdk/util.h 00:02:52.074 TEST_HEADER include/spdk/ublk.h 00:02:52.074 TEST_HEADER include/spdk/uuid.h 00:02:52.074 TEST_HEADER include/spdk/version.h 00:02:52.074 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:52.074 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:52.074 TEST_HEADER include/spdk/vhost.h 00:02:52.074 TEST_HEADER include/spdk/vmd.h 00:02:52.074 TEST_HEADER include/spdk/xor.h 00:02:52.074 TEST_HEADER include/spdk/zipf.h 00:02:52.075 CXX test/cpp_headers/accel.o 00:02:52.075 CXX test/cpp_headers/accel_module.o 00:02:52.075 CXX test/cpp_headers/assert.o 00:02:52.075 CXX test/cpp_headers/barrier.o 00:02:52.075 CXX test/cpp_headers/base64.o 00:02:52.075 CXX test/cpp_headers/bdev.o 00:02:52.075 CXX test/cpp_headers/bdev_module.o 00:02:52.075 CXX test/cpp_headers/bdev_zone.o 00:02:52.075 CXX test/cpp_headers/bit_array.o 00:02:52.075 CXX test/cpp_headers/bit_pool.o 00:02:52.075 CXX test/cpp_headers/blob_bdev.o 00:02:52.075 CXX test/cpp_headers/blobfs_bdev.o 00:02:52.075 CXX test/cpp_headers/blobfs.o 00:02:52.075 CC app/spdk_dd/spdk_dd.o 00:02:52.075 CXX test/cpp_headers/blob.o 00:02:52.075 CXX test/cpp_headers/conf.o 00:02:52.075 CXX test/cpp_headers/config.o 00:02:52.075 CXX test/cpp_headers/cpuset.o 00:02:52.075 CXX test/cpp_headers/crc16.o 00:02:52.075 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:52.075 CC app/nvmf_tgt/nvmf_main.o 00:02:52.075 CC app/iscsi_tgt/iscsi_tgt.o 00:02:52.075 CXX test/cpp_headers/crc32.o 00:02:52.075 CC app/spdk_tgt/spdk_tgt.o 00:02:52.075 CC test/app/jsoncat/jsoncat.o 00:02:52.075 CC test/env/vtophys/vtophys.o 00:02:52.075 CC test/thread/poller_perf/poller_perf.o 00:02:52.075 CC examples/ioat/verify/verify.o 00:02:52.075 CC test/env/memory/memory_ut.o 00:02:52.075 CC examples/ioat/perf/perf.o 00:02:52.075 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:52.075 CC examples/util/zipf/zipf.o 00:02:52.075 CC test/env/pci/pci_ut.o 00:02:52.075 CC test/app/histogram_perf/histogram_perf.o 00:02:52.075 CC test/app/stub/stub.o 00:02:52.075 CC app/fio/nvme/fio_plugin.o 
00:02:52.332 CC test/app/bdev_svc/bdev_svc.o 00:02:52.332 CC test/dma/test_dma/test_dma.o 00:02:52.332 CC app/fio/bdev/fio_plugin.o 00:02:52.332 LINK spdk_lspci 00:02:52.332 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:52.332 CC test/env/mem_callbacks/mem_callbacks.o 00:02:52.332 LINK rpc_client_test 00:02:52.332 LINK spdk_nvme_discover 00:02:52.332 LINK jsoncat 00:02:52.332 LINK poller_perf 00:02:52.595 LINK vtophys 00:02:52.595 CXX test/cpp_headers/crc64.o 00:02:52.595 LINK histogram_perf 00:02:52.595 LINK zipf 00:02:52.595 CXX test/cpp_headers/dif.o 00:02:52.595 LINK interrupt_tgt 00:02:52.595 CXX test/cpp_headers/dma.o 00:02:52.595 LINK nvmf_tgt 00:02:52.595 CXX test/cpp_headers/endian.o 00:02:52.595 CXX test/cpp_headers/env_dpdk.o 00:02:52.595 CXX test/cpp_headers/env.o 00:02:52.595 CXX test/cpp_headers/event.o 00:02:52.595 LINK spdk_trace_record 00:02:52.595 LINK env_dpdk_post_init 00:02:52.595 CXX test/cpp_headers/fd_group.o 00:02:52.595 CXX test/cpp_headers/fd.o 00:02:52.595 CXX test/cpp_headers/file.o 00:02:52.595 CXX test/cpp_headers/ftl.o 00:02:52.595 LINK stub 00:02:52.595 CXX test/cpp_headers/gpt_spec.o 00:02:52.595 LINK iscsi_tgt 00:02:52.595 CXX test/cpp_headers/hexlify.o 00:02:52.595 CXX test/cpp_headers/histogram_data.o 00:02:52.595 LINK spdk_tgt 00:02:52.595 CXX test/cpp_headers/idxd.o 00:02:52.595 CXX test/cpp_headers/idxd_spec.o 00:02:52.595 LINK bdev_svc 00:02:52.595 LINK verify 00:02:52.595 LINK ioat_perf 00:02:52.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:52.595 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:52.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:52.857 CXX test/cpp_headers/init.o 00:02:52.857 CXX test/cpp_headers/ioat.o 00:02:52.857 CXX test/cpp_headers/ioat_spec.o 00:02:52.857 CXX test/cpp_headers/iscsi_spec.o 00:02:52.857 LINK spdk_dd 00:02:52.857 CXX test/cpp_headers/json.o 00:02:52.857 LINK spdk_trace 00:02:52.857 CXX test/cpp_headers/jsonrpc.o 00:02:52.857 CXX test/cpp_headers/keyring.o 00:02:52.857 CXX test/cpp_headers/keyring_module.o 00:02:52.857 CXX test/cpp_headers/likely.o 00:02:52.857 CXX test/cpp_headers/log.o 00:02:52.857 CXX test/cpp_headers/lvol.o 00:02:52.857 CXX test/cpp_headers/memory.o 00:02:52.857 CXX test/cpp_headers/mmio.o 00:02:52.857 CXX test/cpp_headers/nbd.o 00:02:52.857 CXX test/cpp_headers/notify.o 00:02:52.857 CXX test/cpp_headers/nvme.o 00:02:52.857 LINK pci_ut 00:02:52.857 CXX test/cpp_headers/nvme_intel.o 00:02:52.857 CXX test/cpp_headers/nvme_ocssd.o 00:02:52.857 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:52.857 CXX test/cpp_headers/nvme_spec.o 00:02:52.857 CXX test/cpp_headers/nvme_zns.o 00:02:52.857 CXX test/cpp_headers/nvmf_cmd.o 00:02:52.857 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:53.127 CXX test/cpp_headers/nvmf.o 00:02:53.127 CXX test/cpp_headers/nvmf_spec.o 00:02:53.127 LINK test_dma 00:02:53.127 CXX test/cpp_headers/nvmf_transport.o 00:02:53.127 CXX test/cpp_headers/opal.o 00:02:53.127 CC test/event/event_perf/event_perf.o 00:02:53.127 CC test/event/reactor/reactor.o 00:02:53.127 CC test/event/reactor_perf/reactor_perf.o 00:02:53.127 LINK nvme_fuzz 00:02:53.127 CXX test/cpp_headers/opal_spec.o 00:02:53.127 CXX test/cpp_headers/pci_ids.o 00:02:53.127 LINK spdk_bdev 00:02:53.127 CXX test/cpp_headers/pipe.o 00:02:53.127 CXX test/cpp_headers/queue.o 00:02:53.127 CC examples/idxd/perf/perf.o 00:02:53.127 CC examples/sock/hello_world/hello_sock.o 00:02:53.127 CC examples/vmd/lsvmd/lsvmd.o 00:02:53.387 CC test/event/app_repeat/app_repeat.o 00:02:53.387 CC examples/thread/thread/thread_ex.o 
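The CC app/fio/nvme/fio_plugin.o and CC app/fio/bdev/fio_plugin.o steps above build SPDK's fio ioengines. Purely as a sketch of how such a plugin is typically driven (the LD_PRELOAD path, device address, and --filename syntax below are placeholders that vary between SPDK releases and are not taken from this run):

  # Hypothetical invocation of the NVMe fio plugin built above; consult the
  # fio_plugin README in the SPDK tree actually in use for exact syntax.
  LD_PRELOAD=/path/to/spdk/build/fio/spdk_nvme fio \
      --name=seq_read --ioengine=spdk --thread=1 \
      --filename='trtype=PCIe traddr=0000.d8.00.0 ns=1' \
      --rw=read --bs=4k --iodepth=32 --time_based=1 --runtime=30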
00:02:53.387 CC examples/vmd/led/led.o 00:02:53.387 CXX test/cpp_headers/reduce.o 00:02:53.387 CXX test/cpp_headers/rpc.o 00:02:53.387 CXX test/cpp_headers/scheduler.o 00:02:53.387 LINK spdk_nvme 00:02:53.387 CXX test/cpp_headers/scsi.o 00:02:53.387 CXX test/cpp_headers/scsi_spec.o 00:02:53.387 CXX test/cpp_headers/sock.o 00:02:53.387 CXX test/cpp_headers/stdinc.o 00:02:53.387 CXX test/cpp_headers/string.o 00:02:53.387 CXX test/cpp_headers/thread.o 00:02:53.387 CXX test/cpp_headers/trace.o 00:02:53.387 CXX test/cpp_headers/trace_parser.o 00:02:53.387 LINK vhost_fuzz 00:02:53.387 CXX test/cpp_headers/tree.o 00:02:53.387 CC test/event/scheduler/scheduler.o 00:02:53.387 CXX test/cpp_headers/ublk.o 00:02:53.387 CXX test/cpp_headers/util.o 00:02:53.387 CXX test/cpp_headers/uuid.o 00:02:53.387 CXX test/cpp_headers/version.o 00:02:53.387 LINK reactor 00:02:53.387 CXX test/cpp_headers/vfio_user_pci.o 00:02:53.387 CXX test/cpp_headers/vfio_user_spec.o 00:02:53.387 CXX test/cpp_headers/vhost.o 00:02:53.387 CXX test/cpp_headers/vmd.o 00:02:53.387 LINK reactor_perf 00:02:53.387 CXX test/cpp_headers/xor.o 00:02:53.388 CXX test/cpp_headers/zipf.o 00:02:53.388 LINK event_perf 00:02:53.388 CC app/vhost/vhost.o 00:02:53.388 LINK spdk_nvme_perf 00:02:53.647 LINK lsvmd 00:02:53.647 LINK mem_callbacks 00:02:53.647 LINK led 00:02:53.647 LINK app_repeat 00:02:53.647 LINK spdk_nvme_identify 00:02:53.647 LINK spdk_top 00:02:53.647 LINK hello_sock 00:02:53.647 LINK thread 00:02:53.905 CC test/nvme/aer/aer.o 00:02:53.905 CC test/nvme/sgl/sgl.o 00:02:53.905 CC test/nvme/overhead/overhead.o 00:02:53.905 CC test/nvme/startup/startup.o 00:02:53.905 CC test/nvme/reset/reset.o 00:02:53.905 CC test/nvme/e2edp/nvme_dp.o 00:02:53.905 CC test/nvme/err_injection/err_injection.o 00:02:53.905 CC test/nvme/reserve/reserve.o 00:02:53.905 CC test/nvme/simple_copy/simple_copy.o 00:02:53.905 CC test/nvme/connect_stress/connect_stress.o 00:02:53.905 CC test/nvme/boot_partition/boot_partition.o 00:02:53.905 LINK vhost 00:02:53.905 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:53.905 CC test/nvme/fused_ordering/fused_ordering.o 00:02:53.905 LINK scheduler 00:02:53.906 CC test/blobfs/mkfs/mkfs.o 00:02:53.906 CC test/nvme/cuse/cuse.o 00:02:53.906 CC test/nvme/fdp/fdp.o 00:02:53.906 CC test/nvme/compliance/nvme_compliance.o 00:02:53.906 CC test/accel/dif/dif.o 00:02:53.906 LINK idxd_perf 00:02:53.906 CC test/lvol/esnap/esnap.o 00:02:54.164 LINK boot_partition 00:02:54.164 LINK connect_stress 00:02:54.164 LINK doorbell_aers 00:02:54.164 LINK fused_ordering 00:02:54.164 LINK reserve 00:02:54.164 LINK simple_copy 00:02:54.164 LINK startup 00:02:54.164 LINK err_injection 00:02:54.164 LINK sgl 00:02:54.164 LINK nvme_dp 00:02:54.164 LINK aer 00:02:54.164 LINK reset 00:02:54.164 LINK mkfs 00:02:54.164 LINK overhead 00:02:54.164 CC examples/nvme/reconnect/reconnect.o 00:02:54.164 CC examples/nvme/abort/abort.o 00:02:54.164 CC examples/nvme/arbitration/arbitration.o 00:02:54.164 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:54.164 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:54.164 CC examples/nvme/hotplug/hotplug.o 00:02:54.164 CC examples/nvme/hello_world/hello_world.o 00:02:54.164 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:54.164 LINK nvme_compliance 00:02:54.421 LINK fdp 00:02:54.421 CC examples/accel/perf/accel_perf.o 00:02:54.421 CC examples/blob/cli/blobcli.o 00:02:54.421 LINK dif 00:02:54.421 CC examples/blob/hello_world/hello_blob.o 00:02:54.421 LINK pmr_persistence 00:02:54.421 LINK memory_ut 00:02:54.421 LINK cmb_copy 
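The TEST_HEADER and CXX test/cpp_headers entries scattered through this part of the build are a self-containment check: each public header under include/spdk is compiled in its own translation unit (and from C++, exercising the extern "C" guards), so a missing transitive include fails loudly here rather than in user code. Independent of SPDK's generated targets, the general technique is a loop along these lines (compiler, flags, and paths are illustrative):

  # Compile each public header in isolation; a header that is not
  # self-contained fails its own tiny translation unit.
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr")
      printf '#include <spdk/%s>\n' "$name" > /tmp/hdr_check.cpp
      c++ -Iinclude -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $name"
  done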
00:02:54.677 LINK hello_world 00:02:54.677 LINK arbitration 00:02:54.677 LINK hotplug 00:02:54.677 LINK reconnect 00:02:54.677 LINK abort 00:02:54.677 LINK hello_blob 00:02:54.933 LINK nvme_manage 00:02:54.933 LINK accel_perf 00:02:54.933 CC test/bdev/bdevio/bdevio.o 00:02:54.933 LINK blobcli 00:02:55.191 LINK iscsi_fuzz 00:02:55.191 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.191 CC examples/bdev/bdevperf/bdevperf.o 00:02:55.191 LINK bdevio 00:02:55.455 LINK cuse 00:02:55.455 LINK hello_bdev 00:02:56.024 LINK bdevperf 00:02:56.281 CC examples/nvmf/nvmf/nvmf.o 00:02:56.847 LINK nvmf 00:02:58.748 LINK esnap 00:02:59.312 00:02:59.312 real 0m48.766s 00:02:59.312 user 10m6.090s 00:02:59.312 sys 2m28.518s 00:02:59.312 11:29:07 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:59.312 11:29:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:59.312 ************************************ 00:02:59.312 END TEST make 00:02:59.312 ************************************ 00:02:59.312 11:29:07 -- common/autotest_common.sh@1142 -- $ return 0 00:02:59.312 11:29:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:59.312 11:29:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:59.312 11:29:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:59.312 11:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.312 11:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:59.312 11:29:07 -- pm/common@44 -- $ pid=2815719 00:02:59.312 11:29:07 -- pm/common@50 -- $ kill -TERM 2815719 00:02:59.312 11:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.312 11:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:59.312 11:29:07 -- pm/common@44 -- $ pid=2815721 00:02:59.312 11:29:07 -- pm/common@50 -- $ kill -TERM 2815721 00:02:59.312 11:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.312 11:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:59.312 11:29:07 -- pm/common@44 -- $ pid=2815723 00:02:59.312 11:29:07 -- pm/common@50 -- $ kill -TERM 2815723 00:02:59.312 11:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.312 11:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:59.312 11:29:07 -- pm/common@44 -- $ pid=2815752 00:02:59.312 11:29:07 -- pm/common@50 -- $ sudo -E kill -TERM 2815752 00:02:59.312 11:29:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:59.312 11:29:07 -- nvmf/common.sh@7 -- # uname -s 00:02:59.312 11:29:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.312 11:29:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.312 11:29:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.312 11:29:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.312 11:29:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.312 11:29:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.312 11:29:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.312 11:29:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.312 11:29:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.312 11:29:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
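The test/nvmf/common.sh trace that begins here establishes the parameters the TCP test scripts reuse: listener ports 4420-4422, the 192.168.100.x prefix used on the physical e810 link, a loopback TCP address, and an NVME_HOST array carrying the --hostnqn/--hostid flags. A minimal sketch of how those variables feed an nvme-cli connect call is below; TARGET_IP is a placeholder consistent with NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR rather than a value taken from this log, and NVME_SUBNQN is assigned a few entries further down in the same trace.

  # Hypothetical use of the variables sourced from test/nvmf/common.sh.
  TARGET_IP=192.168.100.8   # placeholder address; not defined in this log
  $NVME_CONNECT -t tcp -a "$TARGET_IP" -s "$NVMF_PORT" \
      -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
  nvme list-subsys                     # confirm the controller appeared
  nvme disconnect -n "$NVME_SUBNQN"    # tear the connection back down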
00:02:59.312 11:29:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:59.312 11:29:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:59.312 11:29:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.312 11:29:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.312 11:29:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:59.312 11:29:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:59.312 11:29:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:59.312 11:29:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.312 11:29:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.312 11:29:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.312 11:29:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.312 11:29:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.312 11:29:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.312 11:29:07 -- paths/export.sh@5 -- # export PATH 00:02:59.313 11:29:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.313 11:29:07 -- nvmf/common.sh@47 -- # : 0 00:02:59.313 11:29:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:59.313 11:29:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:59.313 11:29:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:59.313 11:29:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.313 11:29:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.313 11:29:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:59.313 11:29:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:59.313 11:29:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:59.313 11:29:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.313 11:29:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.313 11:29:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.313 11:29:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:59.313 11:29:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:59.313 11:29:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.313 11:29:07 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:59.313 11:29:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.313 11:29:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.313 11:29:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:59.313 11:29:07 -- spdk/autotest.sh@48 -- # udevadm_pid=2871077 00:02:59.313 11:29:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:59.313 11:29:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:59.313 11:29:07 -- pm/common@17 -- # local monitor 00:02:59.313 11:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.313 11:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.313 11:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.313 11:29:07 -- pm/common@21 -- # date +%s 00:02:59.313 11:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.313 11:29:07 -- pm/common@21 -- # date +%s 00:02:59.313 11:29:07 -- pm/common@25 -- # sleep 1 00:02:59.313 11:29:07 -- pm/common@21 -- # date +%s 00:02:59.313 11:29:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035747 00:02:59.313 11:29:07 -- pm/common@21 -- # date +%s 00:02:59.313 11:29:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035747 00:02:59.313 11:29:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035747 00:02:59.313 11:29:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035747 00:02:59.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035747_collect-vmstat.pm.log 00:02:59.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035747_collect-cpu-load.pm.log 00:02:59.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035747_collect-cpu-temp.pm.log 00:02:59.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035747_collect-bmc-pm.bmc.pm.log 00:03:00.248 11:29:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:00.248 11:29:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:00.248 11:29:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:00.248 11:29:08 -- common/autotest_common.sh@10 -- # set +x 00:03:00.248 11:29:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:00.248 11:29:08 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:00.248 11:29:08 -- common/autotest_common.sh@10 -- # set +x 00:03:00.248 11:29:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:00.248 11:29:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.248 11:29:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
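
The collect-* monitors launched above all follow the same pattern: each collector is started in the background with -d pointing at the shared power output directory and -p carrying the per-run monitor.autotest.sh.<epoch> prefix, its output ends up in a timestamped *.pm.log (the "Redirecting to ..." lines), and a <name>.pid file is written in the same directory so the TERM handler seen at the end of the make step can stop it later. A minimal sketch of that start/stop pattern follows; the OUTPUT_DIR value and the bare "./collect-*" invocation are placeholders for illustration only, not the actual scripts/perf/pm helpers, and the flag semantics are inferred from the trace.

#!/usr/bin/env bash
# Illustrative sketch of the monitor start/stop pattern visible in the trace.
# OUTPUT_DIR and the collector invocation are placeholders, not the real
# SPDK scripts/perf/pm helpers.
OUTPUT_DIR=${OUTPUT_DIR:-/tmp/power}
STAMP=$(date +%s)

start_monitor() {
    local name=$1
    # Run the collector in the background; keep its output in a timestamped
    # log, mirroring the "Redirecting to ..._<name>.pm.log" lines above.
    "./$name" -d "$OUTPUT_DIR" -l -p "monitor.autotest.sh.$STAMP" \
        > "$OUTPUT_DIR/monitor.autotest.sh.${STAMP}_${name}.pm.log" 2>&1 &
    echo $! > "$OUTPUT_DIR/$name.pid"
}

stop_monitors() {
    local pidfile pid
    for pidfile in "$OUTPUT_DIR"/*.pid; do
        [[ -e $pidfile ]] || continue
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true
    done
}

mkdir -p "$OUTPUT_DIR"
start_monitor collect-cpu-load
start_monitor collect-vmstat
trap stop_monitors EXIT

Keeping one PID file per collector next to its log is what lets the cleanup path kill the monitors individually even if the test run aborts partway through, which is exactly the kill -TERM <pid> sequence recorded earlier in this log.
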
00:03:00.248 11:29:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:00.248 11:29:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.248 11:29:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:00.248 11:29:08 -- common/autotest_common.sh@1455 -- # uname 00:03:00.248 11:29:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:00.248 11:29:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:00.248 11:29:08 -- common/autotest_common.sh@1475 -- # uname 00:03:00.248 11:29:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:00.248 11:29:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:00.248 11:29:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:00.248 11:29:08 -- spdk/autotest.sh@72 -- # hash lcov 00:03:00.248 11:29:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:00.248 11:29:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:00.248 --rc lcov_branch_coverage=1 00:03:00.248 --rc lcov_function_coverage=1 00:03:00.248 --rc genhtml_branch_coverage=1 00:03:00.248 --rc genhtml_function_coverage=1 00:03:00.248 --rc genhtml_legend=1 00:03:00.248 --rc geninfo_all_blocks=1 00:03:00.248 ' 00:03:00.248 11:29:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:00.248 --rc lcov_branch_coverage=1 00:03:00.248 --rc lcov_function_coverage=1 00:03:00.248 --rc genhtml_branch_coverage=1 00:03:00.248 --rc genhtml_function_coverage=1 00:03:00.248 --rc genhtml_legend=1 00:03:00.248 --rc geninfo_all_blocks=1 00:03:00.248 ' 00:03:00.505 11:29:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:00.505 --rc lcov_branch_coverage=1 00:03:00.505 --rc lcov_function_coverage=1 00:03:00.505 --rc genhtml_branch_coverage=1 00:03:00.505 --rc genhtml_function_coverage=1 00:03:00.505 --rc genhtml_legend=1 00:03:00.505 --rc geninfo_all_blocks=1 00:03:00.505 --no-external' 00:03:00.505 11:29:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:00.505 --rc lcov_branch_coverage=1 00:03:00.505 --rc lcov_function_coverage=1 00:03:00.505 --rc genhtml_branch_coverage=1 00:03:00.505 --rc genhtml_function_coverage=1 00:03:00.505 --rc genhtml_legend=1 00:03:00.505 --rc geninfo_all_blocks=1 00:03:00.505 --no-external' 00:03:00.505 11:29:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:00.505 lcov: LCOV version 1.14 00:03:00.505 11:29:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:18.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:18.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:30.838 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:30.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:30.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:30.839 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:30.839 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:30.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:30.840 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:30.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:35.037 11:29:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:35.037 11:29:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:35.037 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:03:35.037 11:29:42 -- spdk/autotest.sh@91 -- # rm -f 00:03:35.037 11:29:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.603 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:03:35.603 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:35.603 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:35.603 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:35.603 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:35.603 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:35.603 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:35.603 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:35.603 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:35.603 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:35.603 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:35.603 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:35.603 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:35.603 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:35.603 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:35.603 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:35.861 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:35.861 11:29:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:35.861 11:29:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:35.861 11:29:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:35.861 11:29:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:35.861 11:29:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.861 11:29:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:35.861 11:29:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:35.861 11:29:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.861 11:29:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.861 11:29:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 
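
The get_zoned_devs / is_block_zoned steps traced above decide whether any NVMe namespace is zoned by reading its queue/zoned attribute in sysfs; here nvme0n1 reports "none", so zero zoned devices are collected and the (( 0 > 0 )) branch is skipped. A minimal standalone sketch of that check is below; it is simplified in that the helper in the trace also records a PCI address per device, which is omitted here.

#!/usr/bin/env bash
# Detect zoned NVMe block devices via sysfs, as in the is_block_zoned trace above.
shopt -s nullglob
declare -a zoned_devs=()

is_block_zoned() {
    local device=$1
    # A device without the attribute is treated as not zoned.
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(<"/sys/block/$device/queue/zoned") != none ]]
}

for path in /sys/block/nvme*; do
    dev=${path##*/}
    if is_block_zoned "$dev"; then
        zoned_devs+=("$dev")
    fi
done

echo "zoned devices found: ${#zoned_devs[@]} (${zoned_devs[*]})"

Collecting the zoned namespaces up front lets the later steps in this run (GPT probing, dd, and the setup tests that follow) skip or special-case devices that cannot be written like ordinary block devices.
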
00:03:35.861 11:29:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.861 11:29:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.861 11:29:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:35.861 11:29:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:35.861 11:29:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.861 No valid GPT data, bailing 00:03:35.861 11:29:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.861 11:29:43 -- scripts/common.sh@391 -- # pt= 00:03:35.861 11:29:43 -- scripts/common.sh@392 -- # return 1 00:03:35.861 11:29:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.861 1+0 records in 00:03:35.861 1+0 records out 00:03:35.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00176286 s, 595 MB/s 00:03:35.861 11:29:43 -- spdk/autotest.sh@118 -- # sync 00:03:35.861 11:29:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.861 11:29:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.861 11:29:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:37.765 11:29:45 -- spdk/autotest.sh@124 -- # uname -s 00:03:37.765 11:29:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:37.765 11:29:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:37.765 11:29:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.765 11:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.765 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:03:37.765 ************************************ 00:03:37.765 START TEST setup.sh 00:03:37.765 ************************************ 00:03:37.765 11:29:45 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:37.765 * Looking for test storage... 00:03:37.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:37.765 11:29:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:37.765 11:29:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:37.765 11:29:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:37.765 11:29:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.765 11:29:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.765 11:29:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.765 ************************************ 00:03:37.765 START TEST acl 00:03:37.765 ************************************ 00:03:37.765 11:29:45 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:38.024 * Looking for test storage... 
00:03:38.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.024 11:29:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.024 11:29:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.024 11:29:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:38.024 11:29:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:38.024 11:29:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:38.024 11:29:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:38.024 11:29:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:38.024 11:29:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.024 11:29:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.401 11:29:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.401 11:29:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.401 11:29:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.401 11:29:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.401 11:29:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.401 11:29:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:40.776 Hugepages 00:03:40.776 node hugesize free / total 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 00:03:40.776 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.776 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:40.777 11:29:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:40.777 11:29:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.777 11:29:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.777 11:29:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.777 ************************************ 00:03:40.777 START TEST denied 00:03:40.777 ************************************ 00:03:40.777 11:29:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:40.777 11:29:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:03:40.777 11:29:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:40.777 11:29:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:03:40.777 11:29:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.777 11:29:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.155 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:03:42.155 11:29:50 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.155 11:29:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.685 00:03:44.685 real 0m3.980s 00:03:44.685 user 0m1.114s 00:03:44.685 sys 0m1.914s 00:03:44.685 11:29:52 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.685 11:29:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:44.685 ************************************ 00:03:44.685 END TEST denied 00:03:44.685 ************************************ 00:03:44.685 11:29:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:44.685 11:29:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:44.685 11:29:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.944 11:29:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.944 11:29:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.944 ************************************ 00:03:44.944 START TEST allowed 00:03:44.944 ************************************ 00:03:44.944 11:29:52 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:44.944 11:29:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:03:44.944 11:29:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:03:44.944 11:29:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:44.944 11:29:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.944 11:29:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.535 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.535 11:29:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:47.535 11:29:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:47.535 11:29:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:47.536 11:29:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.536 11:29:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.913 00:03:48.913 real 0m3.997s 00:03:48.913 user 0m0.988s 00:03:48.913 sys 0m1.826s 00:03:48.913 11:29:56 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.913 11:29:56 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:48.913 ************************************ 00:03:48.913 END TEST allowed 00:03:48.913 ************************************ 00:03:48.913 11:29:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:48.913 00:03:48.913 real 0m11.013s 00:03:48.913 user 0m3.319s 00:03:48.913 sys 0m5.655s 00:03:48.913 11:29:56 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.913 11:29:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.914 ************************************ 00:03:48.914 END TEST acl 00:03:48.914 ************************************ 00:03:48.914 11:29:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:48.914 11:29:56 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:48.914 11:29:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.914 11:29:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.914 11:29:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.914 ************************************ 00:03:48.914 START TEST hugepages 00:03:48.914 ************************************ 00:03:48.914 11:29:56 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:48.914 * Looking for test storage... 00:03:48.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 27614920 kB' 'MemAvailable: 31184856 kB' 'Buffers: 2704 kB' 'Cached: 9794060 kB' 'SwapCached: 0 kB' 'Active: 6809872 kB' 'Inactive: 3505248 kB' 'Active(anon): 6420332 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521228 kB' 'Mapped: 215524 kB' 'Shmem: 5901976 kB' 'KReclaimable: 173944 kB' 'Slab: 511124 kB' 'SReclaimable: 173944 kB' 'SUnreclaim: 337180 kB' 'KernelStack: 12400 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 7549492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195392 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.914 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.915 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.916 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.916 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.916 
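Editor's note: the xtrace above is the per-key scan of /proc/meminfo that ends with "echo 2048" for Hugepagesize. The following is a minimal, self-contained sketch of that lookup idiom, reconstructed only from what the trace shows in setup/common.sh; the function name get_meminfo_value and the per-line here-string split are illustrative simplifications, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above: read the file into an array,
# strip the "Node <id> " prefix that per-node files carry, split each line
# on ': ' and print the value of the first key that matches.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node statistics live under sysfs; the global file is the fallback,
    # which is the path the trace above takes (node is empty there).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    shopt -s extglob
    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed with "Node <id> "; strip it before splitting,
    # as the traced helper does with "${mem[@]#Node +([0-9]) }".
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"      # value only, e.g. 2048 for Hugepagesize (kB implied)
            return 0
        fi
    done
    return 1
}

# On the host traced above this prints 2048:
get_meminfo_value Hugepagesize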
11:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.916 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.916 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.916 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.916 11:29:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:48.916 11:29:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.916 11:29:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.916 11:29:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.916 ************************************ 00:03:48.916 START TEST default_setup 00:03:48.916 ************************************ 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.916 11:29:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.294 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:50.294 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:50.294 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:50.294 
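Editor's note: the trace above clears every huge page pool (CLEAR_HUGE=yes, "echo 0" per node and per page size) and then default_setup requests 2097152 kB of 2048 kB pages on node 0, i.e. nr_hugepages=1024. The sketch below mirrors that flow under stated assumptions: the sysfs and procfs paths are taken from the trace, but the redirection targets are assumptions, since xtrace prints the echo commands without their redirections; writing each size directory's nr_hugepages file is the standard kernel interface. Requires root.

#!/usr/bin/env bash
# Clear-then-allocate flow for 2 MiB huge pages, as default_setup does.

default_hugepages=2048                                # kB, from Hugepagesize
requested_kb=2097152                                  # size passed to get_test_nr_hugepages
nr_hugepages=$(( requested_kb / default_hugepages ))  # 2097152 / 2048 = 1024 pages

# CLEAR_HUGE=yes: zero every huge page pool on every NUMA node first.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done

# default_setup then asks for 1024 x 2048 kB pages on node 0 only.
echo "$nr_hugepages" \
    > /sys/devices/system/node/node0/hugepages/hugepages-${default_hugepages}kB/nr_hugepages

# The system-wide count could instead be set through /proc/sys/vm/nr_hugepages,
# the global_huge_nr path recorded in the trace.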
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:50.294 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:50.294 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:50.294 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:50.294 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:50.294 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:51.232 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29698036 kB' 'MemAvailable: 33267972 kB' 'Buffers: 2704 kB' 'Cached: 9794156 kB' 'SwapCached: 0 kB' 'Active: 6828812 kB' 'Inactive: 3505248 kB' 'Active(anon): 6439272 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540332 kB' 'Mapped: 215524 kB' 'Shmem: 5902072 kB' 'KReclaimable: 173944 kB' 'Slab: 510848 kB' 'SReclaimable: 173944 kB' 'SUnreclaim: 336904 kB' 
'KernelStack: 12496 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7569980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.232 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 
11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.233 11:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29697900 kB' 'MemAvailable: 33267836 kB' 'Buffers: 2704 kB' 'Cached: 9794160 kB' 'SwapCached: 0 kB' 'Active: 6828144 kB' 'Inactive: 3505248 kB' 'Active(anon): 6438604 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539696 kB' 'Mapped: 215504 kB' 'Shmem: 5902076 kB' 'KReclaimable: 173944 kB' 'Slab: 510832 kB' 'SReclaimable: 173944 kB' 'SUnreclaim: 336888 kB' 'KernelStack: 12208 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7570000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195520 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.233 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.234 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.495 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29698192 kB' 'MemAvailable: 33268128 kB' 'Buffers: 2704 kB' 'Cached: 9794160 kB' 'SwapCached: 0 kB' 'Active: 6827844 kB' 'Inactive: 3505248 kB' 'Active(anon): 6438304 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539448 kB' 'Mapped: 215504 kB' 'Shmem: 5902076 kB' 'KReclaimable: 173944 kB' 'Slab: 510920 kB' 'SReclaimable: 173944 kB' 'SUnreclaim: 336976 kB' 'KernelStack: 12176 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7570020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195504 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.496 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
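The loop traced above and below is setup/common.sh's get_meminfo: it captures the relevant meminfo file into an array, strips any leading "Node N " prefix, then splits each line on IFS=': ' and compares the key against the requested field (here HugePages_Rsvd), echoing the value on a match. A standalone sketch of that idiom (hypothetical helper name; simplified, not the SPDK script itself):

# Sketch only: same parsing idiom as the trace, under a hypothetical name.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node files live under /sys and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix, if any
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then    # literal match, e.g. HugePages_Rsvd
            echo "$val"
            return 0
        fi
    done
    echo 0                               # fallback if the key is missing (sketch behavior)
}

Usage mirroring the calls in the trace: get_meminfo_sketch HugePages_Rsvd reads the system-wide value, while get_meminfo_sketch HugePages_Surp 0 reads NUMA node 0.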
00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 
11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.497 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.498 nr_hugepages=1024 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.498 resv_hugepages=0 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.498 surplus_hugepages=0 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.498 anon_hugepages=0 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29698644 
kB' 'MemAvailable: 33268580 kB' 'Buffers: 2704 kB' 'Cached: 9794200 kB' 'SwapCached: 0 kB' 'Active: 6827668 kB' 'Inactive: 3505248 kB' 'Active(anon): 6438128 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539248 kB' 'Mapped: 215460 kB' 'Shmem: 5902116 kB' 'KReclaimable: 173944 kB' 'Slab: 511072 kB' 'SReclaimable: 173944 kB' 'SUnreclaim: 337128 kB' 'KernelStack: 12288 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7570044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195504 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.498 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
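For context around this scan: earlier in the trace (hugepages.sh@99-@105) surp and resv were both set to 0 and nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 were echoed; the scan in progress here is the get_meminfo HugePages_Total call from hugepages.sh@110, which returns 1024 further below and is compared against nr_hugepages + surp + resv. A condensed sketch of that accounting (assumption: a simplified restatement, reusing the hypothetical get_meminfo_sketch defined after the earlier scan):

# Sketch only: the kind of pool accounting the trace performs after the read.
nr_hugepages=1024                                 # value echoed at hugepages.sh@102
surp=$(get_meminfo_sketch HugePages_Surp)         # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)         # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)       # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total"
else
    echo "unexpected hugepage pool: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi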
00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.499 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:51.500 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:51.501 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20649624 kB' 'MemUsed: 3922732 kB' 'SwapCached: 0 kB' 'Active: 1169456 kB' 'Inactive: 72500 kB' 'Active(anon): 1040188 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 951896 kB' 'Mapped: 78408 kB' 'AnonPages: 293280 kB' 'Shmem: 750128 kB' 'KernelStack: 6776 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45600 kB' 'Slab: 192332 kB' 'SReclaimable: 45600 kB' 'SUnreclaim: 146732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... 00:03:51.501 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: field-by-field scan of the node0 meminfo dump above; every field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped with 'continue'; repeated per-field trace omitted ...]
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:51.502
00:03:51.502 real 0m2.413s
00:03:51.502 user 0m0.670s
00:03:51.502 sys 0m0.873s
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:51.502 11:29:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:51.502 ************************************
00:03:51.502 END TEST default_setup
00:03:51.502 ************************************
00:03:51.502 11:29:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:51.502 11:29:59 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:51.502 11:29:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:51.502 11:29:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.502 11:29:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:51.502 ************************************
00:03:51.502 START TEST per_node_1G_alloc
00:03:51.502 ************************************
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
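default_setup passes (node0 reports 1024 pages, matching the expectation) and run_test moves on to per_node_1G_alloc, where get_test_nr_hugepages 1048576 0 1 turns a 1 GiB request into a per-node page count. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size reported in the meminfo dumps below (variable names here are illustrative, not the test's own):

    #!/usr/bin/env bash
    # Sketch of the size-to-pages conversion seen in the trace: 1048576 kB (1 GiB)
    # divided by the default hugepage size gives the per-node page count, and
    # HUGENODE=0,1 spreads that count over two NUMA nodes.
    size_kb=1048576
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    nodes=(0 1)
    pages_per_node=$(( size_kb / hugepage_kb ))        # 512
    total=$(( pages_per_node * ${#nodes[@]} ))         # 1024, the value verified later
    echo "per node: $pages_per_node  total: $total"

This is why the trace below sets nr_hugepages=512 per node and later checks a system-wide total of 1024.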
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
[... setup/hugepages.sh@62-73: user_nodes=('0' '1'), _nr_hugepages=512, _no_nodes=2; the loop over user_nodes assigns nodes_test[0]=512 and nodes_test[1]=512 and returns 0; per-assignment trace omitted ...]
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
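With NRHUGE=512 and HUGENODE=0,1 set, the test calls setup output, which runs scripts/setup.sh to apply the reservation and re-check device bindings (the vfio-pci messages that follow). A rough manual equivalent of the per-node reservation step, not the script's own code; the sysfs paths are the standard kernel hugetlb interface and require root:

    #!/usr/bin/env bash
    # Reserve 512 x 2 MiB hugepages on NUMA nodes 0 and 1 by hand, then confirm
    # the system-wide counters. scripts/setup.sh automates this plus device binding.
    NRHUGE=512
    for node in 0 1; do
        echo "$NRHUGE" > /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    done
    grep -E '^HugePages_(Total|Free):' /proc/meminfo   # expect 1024 / 1024 afterwards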
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.502 11:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:52.879 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:52.879 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:52.879 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:52.879 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:52.879 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:52.879 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:52.879 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:52.879 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:52.879 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:52.879 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:52.879 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:52.879 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:52.879 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:52.879 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:52.879 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:52.879 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:52.879 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
[... setup/hugepages.sh@89-96: local node sorted_t sorted_s surp resv anon; transparent_hugepage check [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] ...]
11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... setup/common.sh@17-31: get=AnonHugePages, node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _ ...]
11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29677756 kB' 'MemAvailable: 33247680 kB' 'Buffers: 2704 kB' 'Cached: 9794276 kB' 'SwapCached: 0 kB' 'Active: 6829796 kB' 'Inactive: 3505248 kB' 'Active(anon): 6440256 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541380 kB' 'Mapped: 215620 kB' 'Shmem: 5902192 kB' 'KReclaimable: 173920 kB' 'Slab: 510920 kB' 'SReclaimable: 173920 kB' 'SUnreclaim: 337000 kB' 'KernelStack: 12384 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7575868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB'
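The printf above is setup/common.sh's get_meminfo dumping the whole of /proc/meminfo after mapfile has read it into an array; the helper then walks it with IFS=': ' and read, skipping every field with continue until the requested one matches, which is what produces the long scans collapsed in this section. A simplified stand-in for the same lookup (illustrative; the real helper also handles the per-node /sys/devices/system/node/nodeN/meminfo files):

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo idea: split each "Field: value ..." line
    # with IFS=': ' and print the value of the first field matching the request.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Total   # 1024 on this runner, per the dump above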
[... setup/common.sh@32: the read loop scans the full /proc/meminfo dump above against AnonHugePages, skipping MemTotal through HardwareCorrupted with 'continue'; repeated per-field trace omitted ...]
00:03:52.880 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.880 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.880 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.880 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
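AnonHugePages comes back as 0 and is stored in anon; the same lookup is repeated next for HugePages_Surp and HugePages_Rsvd, and every dump in this section reports 1024 total and 1024 free hugepages with nothing reserved or surplus. A compact way to pull the same counters outside the harness (illustrative only, not the test's code):

    #!/usr/bin/env bash
    # Gather the counters verify_nr_hugepages is after and compare the total
    # against the 1024 pages requested for this test.
    expected=1024
    anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv total=$total"
    (( total == expected )) && echo "hugepage total matches $expected"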
00:03:52.880 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... setup/common.sh@17-31: get=HugePages_Surp, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ' read loop as above ...]
00:03:52.880 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo dump; identical in shape to the one above with small drifts, e.g. 'MemFree: 29677836 kB' 'Active: 6832820 kB' 'AnonPages: 544420 kB' 'Mapped: 215912 kB' 'Committed_AS: 7578660 kB'; hugepage counters unchanged: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB']
[... setup/common.sh@32: field-by-field scan against HugePages_Surp, every non-matching field skipped with 'continue'; repeated trace omitted ...]
00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc --
setup/hugepages.sh@99 -- # surp=0 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.881 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29678344 kB' 'MemAvailable: 33248268 kB' 'Buffers: 2704 kB' 'Cached: 9794296 kB' 'SwapCached: 0 kB' 'Active: 6835576 kB' 'Inactive: 3505248 kB' 'Active(anon): 6446036 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547052 kB' 'Mapped: 215912 kB' 'Shmem: 5902212 kB' 'KReclaimable: 173920 kB' 'Slab: 510888 kB' 'SReclaimable: 173920 kB' 'SUnreclaim: 336968 kB' 'KernelStack: 12352 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7580700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195668 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 
11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.882 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.883 nr_hugepages=1024 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.883 
resv_hugepages=0 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.883 surplus_hugepages=0 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.883 anon_hugepages=0 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29678512 kB' 'MemAvailable: 33248428 kB' 'Buffers: 2704 kB' 'Cached: 9794320 kB' 'SwapCached: 0 kB' 'Active: 6835168 kB' 'Inactive: 3505248 kB' 'Active(anon): 6445628 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546688 kB' 'Mapped: 216344 kB' 'Shmem: 5902236 kB' 'KReclaimable: 173904 kB' 'Slab: 510904 kB' 'SReclaimable: 173904 kB' 'SUnreclaim: 337000 kB' 'KernelStack: 12384 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7580724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195620 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.883 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.884 11:30:00 
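With HugePages_Surp and HugePages_Rsvd both read back as 0 and HugePages_Total as 1024, the checks around setup/hugepages.sh@107 through @110 assert that the global pool is self-consistent: the total must equal the requested page count plus any surplus and reserved pages. A standalone sketch of that invariant, with illustrative variable names and plain awk lookups instead of the script's get_meminfo helper:

#!/usr/bin/env bash
nr_hugepages=1024                                             # pages requested by the test
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in this run
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 in this run
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run

# Every allocated page must be accounted for before per-node checks make sense.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi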
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21678804 kB' 'MemUsed: 2893552 kB' 'SwapCached: 0 kB' 'Active: 1171556 kB' 'Inactive: 72500 kB' 'Active(anon): 1042288 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 952028 kB' 'Mapped: 78408 kB' 'AnonPages: 295200 kB' 'Shmem: 750260 kB' 'KernelStack: 6808 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45576 kB' 'Slab: 192388 kB' 'SReclaimable: 45576 kB' 'SUnreclaim: 146812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
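The get_nodes step traced here enumerates /sys/devices/system/node/node<N> (two nodes on this host, 512 pages expected on each) and then re-runs the meminfo lookup against each node's own meminfo file, as the node0 read above shows. A standalone sketch of that per-node walk, with illustrative names:

#!/usr/bin/env bash
shopt -s extglob nullglob
declare -A node_pages
for node_dir in /sys/devices/system/node/node+([0-9]); do
    n=${node_dir##*node}
    # Per-node meminfo lines are prefixed with "Node <n> ", so the value is field 4.
    node_pages[$n]=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
done
echo "nodes found: ${#node_pages[@]}"                 # 2 on the host in this run
for n in "${!node_pages[@]}"; do
    echo "node$n HugePages_Total: ${node_pages[$n]}"  # 512 each in this run
done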
continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 
11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.884 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
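The trace above is setup/common.sh's get_meminfo walking node 0's meminfo for HugePages_Surp: every key that is not the one requested falls into the continue branch at common.sh@32. A minimal, illustrative sketch of that lookup, with the helper name paraphrased from the trace rather than copied from the repository:

  shopt -s extglob   # the "+([0-9])" pattern below is an extended glob, as in common.sh@29
  # Illustrative sketch only -- the real helper is get_meminfo in setup/common.sh.
  get_meminfo_sketch() {
      local get=$1 node=$2 var val
      local -a mem
      local mem_f=/proc/meminfo
      # Use the per-node meminfo file when a node index is given and it exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <n> " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as traced above
          echo "$val"                        # value only; the kB unit lands in "_"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as, e.g., get_meminfo_sketch HugePages_Surp 1, it prints the surplus-page count for node 1, which is what the next stretch of trace does against /sys/devices/system/node/node1/meminfo.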
00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7991644 kB' 'MemUsed: 11462672 kB' 'SwapCached: 0 kB' 'Active: 5663744 kB' 'Inactive: 3432748 kB' 'Active(anon): 5403472 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8845020 kB' 'Mapped: 137504 kB' 'AnonPages: 252016 kB' 'Shmem: 5152000 kB' 'KernelStack: 5592 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128328 kB' 'Slab: 318508 kB' 'SReclaimable: 128328 kB' 'SUnreclaim: 190180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
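The printf entry just above is the node 1 snapshot feeding that scan: HugePages_Total and HugePages_Free are both 512 and HugePages_Surp is 0, so the per-node tally stays at the 512 pages the test expects. A rough sketch of the accumulation the hugepages.sh trace (hugepages.sh@115-@130) walks through, with values taken from this run and array names paraphrased:

  # Rough sketch of the per-node check traced here, reusing get_meminfo_sketch from above.
  nodes_test=(512 512)   # expected hugepages per node for this test
  resv=0                 # reserved pages; 0 in this run
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 per the snapshots in this log
      (( nodes_test[node] += surp ))
  done
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]} expecting 512"
  done

Both echo lines reappear verbatim a little further down, right before the final [[ 512 == \5\1\2 ]] comparison lets per_node_1G_alloc pass.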
00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.885 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.886 node0=512 expecting 512 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.886 node1=512 expecting 512 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.886 00:03:52.886 real 0m1.417s 00:03:52.886 user 0m0.581s 00:03:52.886 sys 0m0.809s 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.886 11:30:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.886 ************************************ 00:03:52.886 END TEST per_node_1G_alloc 00:03:52.886 ************************************ 00:03:52.886 11:30:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.886 11:30:00 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:52.886 11:30:00 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.886 11:30:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.886 11:30:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.886 ************************************ 00:03:52.886 START TEST even_2G_alloc 00:03:52.886 ************************************ 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.886 11:30:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.266 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:54.266 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:54.266 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:54.266 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:54.266 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:54.266 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:54.266 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:54.266 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:54.266 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:54.266 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:54.266 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:54.266 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:54.266 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:54.266 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:54.266 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:54.266 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:54.266 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29632336 kB' 'MemAvailable: 33202244 kB' 'Buffers: 2704 kB' 'Cached: 9794408 kB' 'SwapCached: 0 kB' 'Active: 6835136 kB' 'Inactive: 3505248 kB' 'Active(anon): 6445596 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546488 kB' 'Mapped: 216376 kB' 'Shmem: 5902324 kB' 'KReclaimable: 173888 kB' 'Slab: 510688 kB' 'SReclaimable: 173888 kB' 'SUnreclaim: 336800 kB' 'KernelStack: 12368 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7579816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195684 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.266 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 
11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
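This pass over /proc/meminfo is verify_nr_hugepages recording the transparent-hugepage count (AnonHugePages, which comes back 0 a few entries later). The totals it is validating come from the even_2G_alloc setup traced earlier in this test: a 2097152 kB request over 2048 kB hugepages gives NRHUGE=1024, and with HUGE_EVEN_ALLOC=yes the per-node targets work out to 512 pages on each of the two nodes. A worked sketch of that arithmetic, with illustrative variable names:

  # Worked sketch of the even_2G_alloc sizing traced at the start of this test.
  size_kb=2097152                                # requested allocation, in kB
  hugepagesize_kb=2048                           # Hugepagesize reported in /proc/meminfo
  no_nodes=2                                     # NUMA nodes on this host
  nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 1024 pages -> NRHUGE=1024
  per_node=$(( nr_hugepages / no_nodes ))        # 512 pages each on node0 and node1
  echo "NRHUGE=$nr_hugepages, per node=$per_node"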
00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.267 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29636140 kB' 'MemAvailable: 33206048 kB' 'Buffers: 2704 kB' 'Cached: 9794412 kB' 'SwapCached: 0 kB' 'Active: 6835452 kB' 'Inactive: 3505248 kB' 'Active(anon): 6445912 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546820 kB' 'Mapped: 216328 kB' 'Shmem: 5902328 kB' 'KReclaimable: 173888 kB' 'Slab: 510668 kB' 'SReclaimable: 173888 kB' 'SUnreclaim: 336780 kB' 'KernelStack: 12400 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7579836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195652 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.268 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.269 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29636212 kB' 'MemAvailable: 33206120 kB' 'Buffers: 2704 kB' 'Cached: 9794412 kB' 'SwapCached: 0 kB' 'Active: 6835040 kB' 'Inactive: 3505248 kB' 'Active(anon): 6445500 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546416 kB' 'Mapped: 216252 kB' 'Shmem: 5902328 kB' 'KReclaimable: 173888 kB' 'Slab: 510660 kB' 'SReclaimable: 173888 kB' 'SUnreclaim: 336772 kB' 'KernelStack: 12416 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7579856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195652 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.270 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.271 nr_hugepages=1024 00:03:54.271 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.271 resv_hugepages=0 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.272 surplus_hugepages=0 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.272 anon_hugepages=0 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
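At this point the even_2G_alloc test has looked up HugePages_Surp and HugePages_Rsvd in /proc/meminfo (both 0) and confirmed that the 1024 allocated 2048 kB pages (2 GiB total, matching the Hugetlb: 2097152 kB field in the snapshots above) satisfy nr_hugepages + surplus + reserved. The traced get_meminfo helper walks the meminfo fields with IFS=': ' and returns the value for the requested key. The following is a minimal stand-alone sketch of that lookup and of the accounting check from hugepages.sh@107/@109; it is an illustrative reimplementation under assumed names, not the exact setup/common.sh code being traced here.

    #!/usr/bin/env bash
    # Illustrative sketch only; the real helper in setup/common.sh reads a
    # pre-captured snapshot array, supports per-node lookups, and is traced
    # line by line in the log above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Emit the numeric value once the requested key is found.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    # Accounting check analogous to hugepages.sh@107 and @109 in this run:
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 here
    surp=$(get_meminfo HugePages_Surp)            # 0 here
    resv=$(get_meminfo HugePages_Rsvd)            # 0 here
    (( 1024 == nr_hugepages + surp + resv )) && echo "even 2G allocation OK"

The `[[ $var == "$get" ]]` comparison is what produces the long runs of escaped patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the xtrace output: bash prints the right-hand side of the pattern match with every character escaped, once per meminfo key, until the requested key is reached.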
00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29635456 kB' 'MemAvailable: 33205364 kB' 'Buffers: 2704 kB' 'Cached: 9794448 kB' 'SwapCached: 0 kB' 'Active: 6835708 kB' 'Inactive: 3505248 kB' 'Active(anon): 6446168 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547032 kB' 'Mapped: 216252 kB' 'Shmem: 5902364 kB' 'KReclaimable: 173888 kB' 'Slab: 510660 kB' 'SReclaimable: 173888 kB' 'SUnreclaim: 336772 kB' 'KernelStack: 12432 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7579880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195652 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 
11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.272 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.273 
11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.273 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21663892 kB' 'MemUsed: 2908464 kB' 'SwapCached: 0 kB' 'Active: 1170600 kB' 'Inactive: 72500 kB' 'Active(anon): 1041332 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 952132 kB' 'Mapped: 78408 kB' 'AnonPages: 294104 kB' 'Shmem: 750364 kB' 'KernelStack: 6856 kB' 'PageTables: 3804 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45576 kB' 'Slab: 192304 kB' 'SReclaimable: 45576 kB' 'SUnreclaim: 146728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 
11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.274 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7971868 kB' 'MemUsed: 11482448 kB' 'SwapCached: 0 kB' 'Active: 5664504 kB' 'Inactive: 3432748 kB' 'Active(anon): 5404232 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8845024 kB' 'Mapped: 137844 kB' 'AnonPages: 252324 kB' 'Shmem: 5152004 kB' 'KernelStack: 5560 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128312 kB' 'Slab: 318356 kB' 'SReclaimable: 128312 kB' 
'SUnreclaim: 190044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.275 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.276 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
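The field-by-field scan traced above and below is how the test pulls a single value (here HugePages_Surp for node 1) out of a node's meminfo file: open /sys/devices/system/node/nodeN/meminfo, strip the "Node N " prefix, and read each line as a name/value pair until the wanted field turns up. A minimal stand-alone sketch of that lookup, assuming the same sysfs layout; the function name and the here-string are illustrative, not the test script's own code (the script builds an array with mapfile instead):
#!/usr/bin/env bash
# Sketch only: fetch one field from /proc/meminfo or a node's meminfo file.
get_meminfo_sketch() {
    local want=$1 node=$2 file=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # split "HugePages_Surp:      0" into name and value
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}
# Example (value depends on the host): get_meminfo_sketch HugePages_Surp 1   ->   0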
00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.277 node0=512 expecting 512 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.277 node1=512 expecting 512 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.277 00:03:54.277 real 0m1.390s 00:03:54.277 user 0m0.568s 00:03:54.277 sys 0m0.797s 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.277 11:30:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.277 ************************************ 00:03:54.277 END TEST even_2G_alloc 00:03:54.277 ************************************ 00:03:54.277 11:30:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:54.277 11:30:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:54.277 11:30:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.277 11:30:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.277 11:30:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.536 ************************************ 00:03:54.536 START TEST odd_alloc 
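The odd_alloc run announced here asks for 1025 huge pages (2098176 kB of 2048 kB pages, i.e. HUGEMEM=2049) and, per the per-node assignments traced below, places 512 of them on node1 and 513 on node0. One illustrative way to reproduce that split over two nodes; the variable names are not the test script's own:
#!/usr/bin/env bash
# Sketch only: spread an odd hugepage count across two NUMA nodes,
# letting the lowest-numbered node absorb the odd page.
nr_hugepages=1025
no_nodes=2
declare -a nodes
remaining=$nr_hugepages
for (( node = no_nodes - 1; node >= 0; node-- )); do
    share=$(( remaining / (node + 1) ))   # even share of whatever is still unplaced
    nodes[node]=$share
    remaining=$(( remaining - share ))
done
printf 'node%d=%d\n' 1 "${nodes[1]}" 0 "${nodes[0]}"   # node1=512, node0=513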
00:03:54.536 ************************************ 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.536 11:30:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.922 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:55.922 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.922 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:55.922 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:55.922 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:55.922 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:55.922 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:55.922 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:03:55.922 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:55.922 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:55.922 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:55.922 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:55.922 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:55.922 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:55.922 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:55.922 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:55.922 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29631340 kB' 'MemAvailable: 33201228 kB' 'Buffers: 2704 kB' 'Cached: 9794544 kB' 'SwapCached: 0 kB' 'Active: 6833836 kB' 'Inactive: 3505248 kB' 'Active(anon): 6444296 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544872 kB' 'Mapped: 215856 kB' 'Shmem: 5902460 kB' 'KReclaimable: 173848 kB' 'Slab: 510596 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336748 kB' 'KernelStack: 12544 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7568956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195828 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.922 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.923 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.923 
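The xtrace above is setup/common.sh's get_meminfo resolving a single key: it sets IFS=': ', reads each meminfo line into var/val, and hits "continue" for every line until var equals the requested key (here AnonHugePages), then echoes the value and returns, which is why hugepages.sh@97 records anon=0. Below is a minimal sketch of that lookup pattern only; the helper name get_meminfo_value is chosen here for illustration and is not SPDK's API, and the real script additionally handles the per-node /sys/devices/system/node/node$node/meminfo path and strips the "Node N" prefix before parsing.

```bash
#!/usr/bin/env bash
# Minimal sketch of the key lookup the trace shows (illustrative only;
# get_meminfo_value is a hypothetical name, not the SPDK function).
get_meminfo_value() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		# Skip every line until the requested key appears, mirroring
		# the long run of "continue" entries in the trace above.
		[[ $var == "$get" ]] || continue
		echo "$val"   # value in kB, or a bare count for HugePages_* keys
		return 0
	done < /proc/meminfo
	return 1
}

get_meminfo_value AnonHugePages   # prints 0 on this node per the trace
```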
11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29632108 kB' 'MemAvailable: 33201996 kB' 'Buffers: 2704 kB' 'Cached: 9794548 kB' 'SwapCached: 0 kB' 'Active: 6833452 kB' 'Inactive: 3505248 kB' 'Active(anon): 6443912 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544420 kB' 'Mapped: 215544 kB' 'Shmem: 5902464 kB' 'KReclaimable: 173848 kB' 'Slab: 510556 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336708 kB' 'KernelStack: 12752 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7567612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.924 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29630680 kB' 'MemAvailable: 33200568 kB' 'Buffers: 2704 kB' 'Cached: 9794548 kB' 'SwapCached: 0 kB' 'Active: 6833016 kB' 'Inactive: 3505248 kB' 'Active(anon): 6443476 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544212 kB' 'Mapped: 215536 kB' 'Shmem: 5902464 kB' 'KReclaimable: 173848 kB' 'Slab: 510556 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336708 kB' 'KernelStack: 12496 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7566636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195812 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.925 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 
11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.926 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.927 nr_hugepages=1025 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.927 resv_hugepages=0 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.927 surplus_hugepages=0 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.927 anon_hugepages=0 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- 
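At hugepages.sh@97 through @110 the odd_alloc case folds the three lookups together: anon=0, surp=0, resv=0, while the meminfo snapshot reports HugePages_Total and HugePages_Free of 1025 with Hugepagesize 2048 kB (Hugetlb 2099200 kB, i.e. 1025 pages of 2048 kB). The assertions then require 1025 == nr_hugepages + surp + resv and 1025 == nr_hugepages. A short sketch of that bookkeeping, using the values taken directly from the trace:

```bash
#!/usr/bin/env bash
# Hugepage bookkeeping for the odd_alloc check, values as reported in the trace.
nr_hugepages=1025    # HugePages_Total
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd
anon=0               # AnonHugePages (kB)
hugepagesize_kb=2048

# The odd page count must be fully covered by real, non-surplus, non-reserved pages.
(( 1025 == nr_hugepages + surp + resv )) || { echo "surplus/reserved pages leaked" >&2; exit 1; }
(( 1025 == nr_hugepages ))               || { echo "unexpected nr_hugepages" >&2; exit 1; }

# Sanity: Hugetlb accounting matches Total x page size (2099200 kB in the snapshot).
(( nr_hugepages * hugepagesize_kb == 2099200 )) && echo "hugetlb accounting consistent"
```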
setup/common.sh@20 -- # local mem_f mem 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29631376 kB' 'MemAvailable: 33201264 kB' 'Buffers: 2704 kB' 'Cached: 9794584 kB' 'SwapCached: 0 kB' 'Active: 6832040 kB' 'Inactive: 3505248 kB' 'Active(anon): 6442500 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543172 kB' 'Mapped: 215460 kB' 'Shmem: 5902500 kB' 'KReclaimable: 173848 kB' 'Slab: 510516 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336668 kB' 'KernelStack: 12208 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7566656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195700 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.927 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.928 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21647816 kB' 'MemUsed: 2924540 kB' 'SwapCached: 0 kB' 'Active: 1170492 kB' 'Inactive: 72500 kB' 'Active(anon): 1041224 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 952276 kB' 'Mapped: 78408 kB' 'AnonPages: 293896 kB' 'Shmem: 750508 kB' 'KernelStack: 6856 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45576 kB' 'Slab: 192212 kB' 'SReclaimable: 45576 kB' 'SUnreclaim: 146636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
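
Editor's note: the records above trace setup/common.sh's get_meminfo helper. It picks /proc/meminfo, or the per-node /sys/devices/system/node/nodeN/meminfo file when a node argument is given, maps the file into an array, strips the "Node N " prefix, and scans key by key until it reaches the requested field (here HugePages_Surp). A minimal bash sketch of that pattern follows; it is a simplified re-implementation for illustration, not the exact setup/common.sh source.

    #!/usr/bin/env bash
    # Sketch only: simplified version of the get_meminfo pattern traced above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines start with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total    # system-wide, e.g. 1025 in the dump above
    get_meminfo HugePages_Surp 0   # node0, e.g. 0
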
00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.929 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
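
Editor's note: each of the comparisons above is just the scan skipping meminfo fields it does not need. The same per-node numbers can also be read directly from the kernel's hugepage counters under sysfs, which is a convenient cross-check when reading a log like this one. A small illustrative snippet using the standard sysfs paths for the 2048 kB page size used in this run:

    # Per-node 2 MiB hugepage counters, matching the HugePages_* fields
    # scanned from the node meminfo dumps above.
    for node in /sys/devices/system/node/node[0-9]*; do
        d=$node/hugepages/hugepages-2048kB
        echo "${node##*/}: total=$(< "$d/nr_hugepages") free=$(< "$d/free_hugepages") surplus=$(< "$d/surplus_hugepages")"
    done
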
00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7983312 kB' 'MemUsed: 11471004 kB' 'SwapCached: 0 kB' 'Active: 5661788 kB' 'Inactive: 3432748 kB' 'Active(anon): 5401516 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8845036 kB' 'Mapped: 137004 kB' 'AnonPages: 249564 kB' 'Shmem: 5152016 kB' 'KernelStack: 5544 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128272 kB' 'Slab: 318276 kB' 'SReclaimable: 128272 kB' 'SUnreclaim: 190004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
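
Editor's note: the node1 dump above reports HugePages_Total: 513 against node0's 512, which is how odd_alloc realises an odd overall count (1025) on a two-node system. Reproducing such a split by hand is one write per node to the same sysfs counters; this is illustrative only, since in this job the allocation is driven by scripts/setup.sh rather than written directly.

    # Illustrative only: split an odd total of 1025 x 2 MiB pages as 512 + 513.
    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 513 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect HugePages_Total: 1025
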
00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.930 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.931 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:55.932 node0=512 expecting 513 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.932 node1=513 expecting 512 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.932 00:03:55.932 real 0m1.554s 00:03:55.932 user 0m0.655s 00:03:55.932 sys 0m0.876s 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.932 11:30:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.932 ************************************ 00:03:55.932 END TEST odd_alloc 00:03:55.932 ************************************ 00:03:55.932 11:30:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.932 11:30:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.932 11:30:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.932 11:30:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.932 11:30:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.932 ************************************ 00:03:55.932 START TEST custom_alloc 00:03:55.932 ************************************ 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.932 11:30:03 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.932 11:30:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.314 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:57.314 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.314 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:57.314 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:57.314 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:57.314 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:57.314 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:57.314 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:57.314 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:57.314 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:57.314 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:57.314 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
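
Editor's note: the custom_alloc trace just before this device listing shows how the per-node request is encoded for scripts/setup.sh: nodes_hp[0]=512 and nodes_hp[1]=1024 are joined with the comma IFS set at hugepages.sh@167 into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. A minimal sketch of that assembly follows (simplified; variable names follow the trace), after which the setup.sh device listing continues below.

    # Sketch of how the HUGENODE string seen in the trace is assembled.
    build_hugenode() {
        local IFS=,                           # array elements join with commas
        local -a nodes_hp=([0]=512 [1]=1024)  # node0: 512 x 2 MiB, node1: 1024 x 2 MiB
        local -a HUGENODE=()
        local node
        for node in "${!nodes_hp[@]}"; do
            HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        done
        echo "HUGENODE=${HUGENODE[*]}"        # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    }
    build_hugenode
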
00:03:57.314 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:57.314 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:57.314 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:57.314 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:57.314 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28587576 kB' 'MemAvailable: 32157464 kB' 'Buffers: 2704 kB' 'Cached: 9794676 kB' 'SwapCached: 0 kB' 'Active: 6832400 kB' 'Inactive: 3505248 kB' 'Active(anon): 6442860 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543668 kB' 'Mapped: 215564 kB' 'Shmem: 5902592 kB' 'KReclaimable: 173848 kB' 'Slab: 510428 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336580 kB' 'KernelStack: 12416 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7566856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195716 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.314 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28588512 kB' 'MemAvailable: 32158400 kB' 'Buffers: 2704 kB' 'Cached: 9794680 kB' 'SwapCached: 0 kB' 'Active: 6832488 kB' 'Inactive: 3505248 kB' 'Active(anon): 6442948 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543800 kB' 'Mapped: 215496 kB' 'Shmem: 5902596 kB' 'KReclaimable: 173848 kB' 'Slab: 510420 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336572 kB' 'KernelStack: 12448 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7566876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195684 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.315 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.316 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28588756 kB' 'MemAvailable: 32158644 kB' 'Buffers: 2704 kB' 'Cached: 9794680 kB' 'SwapCached: 0 kB' 'Active: 6832040 kB' 'Inactive: 3505248 kB' 'Active(anon): 6442500 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543320 kB' 
'Mapped: 215420 kB' 'Shmem: 5902596 kB' 'KReclaimable: 173848 kB' 'Slab: 510404 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336556 kB' 'KernelStack: 12432 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7566896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195684 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.317 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.579 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.580 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:57.581 nr_hugepages=1536 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.581 resv_hugepages=0 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.581 surplus_hugepages=0 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.581 anon_hugepages=0 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28588740 kB' 'MemAvailable: 32158628 kB' 'Buffers: 2704 kB' 'Cached: 9794720 kB' 'SwapCached: 0 kB' 'Active: 6832416 kB' 'Inactive: 3505248 kB' 'Active(anon): 6442876 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543640 kB' 'Mapped: 215420 kB' 'Shmem: 5902636 kB' 'KReclaimable: 173848 kB' 'Slab: 510404 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336556 kB' 'KernelStack: 12448 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7566916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195684 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.581 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
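[editor's note] At this point the trace has confirmed that the pool of 1536 hugepages equals nr_hugepages + surplus + reserved, and get_nodes (setup/hugepages.sh@27-33) has recorded the custom split this test requests: nodes_sys[0]=512 and nodes_sys[1]=1024 across the two NUMA nodes. A minimal sketch of that discovery step, assuming only the sysfs layout visible in the trace (node directories under /sys/devices/system/node); the 512/1024 values are the ones shown in this log, not a general default:

    #!/usr/bin/env bash
    # Hedged sketch of the node-discovery step seen in the xtrace above
    # (setup/hugepages.sh get_nodes); the paths and the 512/1024 split are
    # taken from this log, everything else is illustrative.
    shopt -s extglob                      # needed for the +([0-9]) glob
    declare -A nodes_sys
    want=(512 1024)                       # per-node request shown in the trace
    i=0
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=${want[i++]}
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "no_nodes=$no_nodes nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"

The per-node loop that follows (hugepages.sh@115-117) then adds the reserved count to each node's expectation and reads back HugePages_Surp for node 0 and node 1 to verify the allocation actually landed where it was requested.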
00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.582 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21654764 kB' 'MemUsed: 2917592 kB' 'SwapCached: 0 kB' 'Active: 1170660 kB' 'Inactive: 72500 kB' 'Active(anon): 1041392 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 952396 kB' 'Mapped: 78408 kB' 'AnonPages: 294016 kB' 'Shmem: 750628 kB' 'KernelStack: 6920 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45576 kB' 'Slab: 192176 kB' 'SReclaimable: 45576 kB' 'SUnreclaim: 146600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.583 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.584 11:30:05 
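[editor's note] The block that starts on the next line repeats the same lookup for node 1, whose dump shows HugePages_Total: 1024 and HugePages_Surp: 0. What the xtrace exercises on every pass is a small meminfo reader: given a node index it switches from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, strips the leading "Node N " prefix, then walks key/value pairs with IFS=': ' until the requested field matches. A minimal sketch under those assumptions (the helper name get_meminfo and the per-node file paths come from the trace itself; the structure below is illustrative, not the upstream implementation):

    # Hedged sketch of the lookup the trace repeats for each key
    # (setup/common.sh get_meminfo); behaviour inferred from the xtrace,
    # details may differ in the real script.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        local mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                     # e.g. HugePages_Surp -> 0
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 0 against the node-0 file dumped earlier in this log, such a reader would print 0, which is exactly the value the trace echoes before returning.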
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 6933700 kB' 'MemUsed: 12520616 kB' 'SwapCached: 0 kB' 'Active: 5661788 kB' 'Inactive: 3432748 kB' 'Active(anon): 5401516 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8845048 kB' 'Mapped: 137012 kB' 'AnonPages: 249624 kB' 'Shmem: 5152028 kB' 'KernelStack: 5528 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128272 kB' 'Slab: 318228 kB' 'SReclaimable: 128272 kB' 'SUnreclaim: 189956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.584 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.584 11:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:57.585 node0=512 expecting 512 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:57.585 node1=1024 expecting 1024 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:57.585 00:03:57.585 real 0m1.534s 00:03:57.585 user 0m0.650s 00:03:57.585 sys 0m0.861s 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.585 11:30:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.585 ************************************ 00:03:57.585 END TEST custom_alloc 00:03:57.585 ************************************ 00:03:57.585 11:30:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:57.585 11:30:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:57.585 11:30:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.585 11:30:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.585 11:30:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.585 ************************************ 00:03:57.585 START TEST no_shrink_alloc 00:03:57.585 ************************************ 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.585 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.586 11:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.989 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.989 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:58.989 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.989 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.989 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.989 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.989 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.989 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.989 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.989 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.989 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.989 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.989 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.989 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.989 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.989 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.989 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.989 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29632084 kB' 'MemAvailable: 33201972 kB' 'Buffers: 2704 kB' 'Cached: 9794808 kB' 'SwapCached: 0 kB' 'Active: 6832856 kB' 'Inactive: 3505248 kB' 'Active(anon): 6443316 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543884 kB' 'Mapped: 215236 kB' 'Shmem: 5902724 kB' 'KReclaimable: 173848 kB' 'Slab: 510632 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336784 kB' 'KernelStack: 12448 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7567148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195732 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
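The trace above shows the get_meminfo helper from setup/common.sh at work: with no node argument it falls back from the per-node /sys/devices/system/node/node<N>/meminfo file to /proc/meminfo, reads the whole snapshot, then walks it key by key with IFS=': ' until the requested field (here AnonHugePages) matches. A minimal standalone sketch of that pattern, assuming plain /proc/meminfo and not the actual SPDK helper, could look like this:

get_meminfo_value() {
    # Sketch only: print the numeric value for one /proc/meminfo key,
    # e.g. AnonHugePages, HugePages_Surp or HugePages_Rsvd.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_value AnonHugePages)    # reported in kB
surp=$(get_meminfo_value HugePages_Surp)   # reported in pages
echo "anon=${anon} surp=${surp}"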
00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
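The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check at hugepages.sh@96 earlier in this trace is the transparent-hugepage guard: AnonHugePages is only fetched from the meminfo snapshot when THP is not pinned to "never". A simplified, hypothetical equivalent that reads the standard sysfs knob directly instead of going through the SPDK helpers:

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: count anonymous huge pages (value is in kB)
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon}"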
00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.990 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29632408 kB' 'MemAvailable: 33202296 kB' 'Buffers: 2704 kB' 'Cached: 9794808 kB' 'SwapCached: 0 kB' 'Active: 6832912 kB' 'Inactive: 3505248 kB' 'Active(anon): 6443372 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543944 kB' 'Mapped: 215632 kB' 'Shmem: 5902724 kB' 'KReclaimable: 173848 kB' 'Slab: 510620 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336772 kB' 'KernelStack: 12496 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7567164 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 195716 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 
11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29632956 kB' 'MemAvailable: 33202844 kB' 'Buffers: 2704 kB' 'Cached: 9794828 kB' 'SwapCached: 0 kB' 'Active: 6832824 kB' 'Inactive: 3505248 kB' 'Active(anon): 6443284 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543772 kB' 'Mapped: 215528 kB' 'Shmem: 5902744 kB' 'KReclaimable: 173848 kB' 'Slab: 510584 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336736 kB' 'KernelStack: 12480 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7567188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195700 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.992 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 
11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.994 nr_hugepages=1024 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.994 resv_hugepages=0 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.994 surplus_hugepages=0 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.994 anon_hugepages=0 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29632956 kB' 'MemAvailable: 33202844 kB' 'Buffers: 2704 kB' 'Cached: 9794848 kB' 'SwapCached: 0 kB' 'Active: 6832840 kB' 'Inactive: 3505248 kB' 'Active(anon): 6443300 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543776 kB' 'Mapped: 215528 kB' 'Shmem: 5902764 kB' 'KReclaimable: 173848 kB' 'Slab: 510584 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336736 kB' 'KernelStack: 12480 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7567208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195700 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:03:58.994 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.995 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20622312 kB' 'MemUsed: 3950044 kB' 'SwapCached: 0 kB' 'Active: 1170948 kB' 'Inactive: 72500 kB' 'Active(anon): 1041680 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 952500 kB' 'Mapped: 78408 kB' 'AnonPages: 294104 kB' 'Shmem: 750732 kB' 'KernelStack: 6952 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45576 kB' 'Slab: 192368 kB' 'SReclaimable: 45576 kB' 'SUnreclaim: 146792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.997 11:30:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ', read -r var val _, a [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test and continue for each remaining field of the node0 meminfo snapshot, Unevictable through HugePages_Free]
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.998 11:30:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:00.376 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:00.376 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:00.376 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:00.376 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:00.377 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:00.377 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:00.377 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:00.377 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:00.377 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:00.377 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:00.377 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:00.377 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:00.377 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:00.377 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:00.377 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:00.377 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:00.377 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:00.377 INFO: Requested 512 hugepages but 1024 already allocated on node0
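Note: the step traced above re-runs SPDK's scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512 (both visible at setup/hugepages.sh@202), so the existing pool is left alone and setup.sh only reports the already-bound devices plus the INFO line. A minimal by-hand reproduction of that step on this host would look like the sketch below; the path and values are taken from the trace, the step needs root, and this is an illustration rather than the harness's exact command line.

  # Re-run the SPDK hugepage/driver setup without clearing the existing pool,
  # asking for 512 pages while 1024 are already allocated (values from the trace).
  CLEAR_HUGE=no NRHUGE=512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  # Expected tail of the output on this host, per the log:
  #   INFO: Requested 512 hugepages but 1024 already allocated on node0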
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29601800 kB' 'MemAvailable: 33171688 kB' 'Buffers: 2704 kB' 'Cached: 9794912 kB' 'SwapCached: 0 kB' 'Active: 6836804 kB' 'Inactive: 3505248 kB' 'Active(anon): 6447264 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547252 kB' 'Mapped: 215588 kB' 'Shmem: 5902828 kB' 'KReclaimable: 173848 kB' 'Slab: 510108 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336260 kB' 'KernelStack: 12736 kB' 'PageTables: 9560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7569524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB'
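Note: at setup/hugepages.sh@96 above, verify_nr_hugepages only goes on to query AnonHugePages because transparent hugepages are not globally disabled; the tested string "always [madvise] never" is the usual content of the kernel's THP policy file. A rough standalone equivalent of that gate, assuming the standard sysfs location (the harness's exact read is not shown in this slice of the trace):

  # Only count anonymous (transparent) hugepages when THP is not forced to "never".
  thp_policy=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp_policy != *"[never]"* ]]; then
      grep AnonHugePages /proc/meminfo   # "AnonHugePages: 0 kB" in the snapshot above
  fi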
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ', read -r var val _, a [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] test and continue for every /proc/meminfo field from MemTotal through HardwareCorrupted]
00:04:00.377 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29615392 kB' 'MemAvailable: 33185280 kB' 'Buffers: 2704 kB' 'Cached: 9794912 kB' 'SwapCached: 0 kB' 'Active: 6836376 kB' 'Inactive: 3505248 kB' 'Active(anon): 6446836 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546800 kB' 'Mapped: 215664 kB' 'Shmem: 5902828 kB' 'KReclaimable: 173848 kB' 'Slab: 510152 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336304 kB' 'KernelStack: 13024 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7569540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB'
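Note: the field-by-field scans condensed above and below are all the same helper at work: setup/common.sh's get_meminfo snapshots the relevant meminfo file (the global /proc/meminfo here, since node= is empty) and walks it until the requested key matches, echoing the value. A self-contained sketch of that lookup pattern follows; the function name is illustrative and the harness's real implementation may differ in detail:

  # Sketch of the meminfo lookup pattern traced in this log (illustrative name).
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo mem line var val _
      # A per-node query reads that node's meminfo instead of the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if present
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }
  # Examples against the snapshots above:
  #   get_meminfo_sketch HugePages_Surp      -> 0
  #   get_meminfo_sketch HugePages_Total 0   -> 1024 (node0)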
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ', read -r var val _, a [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test and continue for every /proc/meminfo field from MemTotal through HugePages_Rsvd]
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:00.378 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29617776 kB' 'MemAvailable: 33187664 kB' 'Buffers: 2704 kB' 'Cached: 9794916 kB' 'SwapCached: 0 kB' 'Active: 6835044 kB' 'Inactive: 3505248 kB' 'Active(anon): 6445504 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545828 kB' 'Mapped: 215660 kB' 'Shmem: 5902832 kB' 'KReclaimable: 173848 kB' 'Slab: 510152 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336304 kB' 'KernelStack: 12688 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7568200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195748 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB'
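Note: all three snapshots above report the same hugepage state (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0), consistent with the earlier "node0=1024 expecting 1024" result: with CLEAR_HUGE=no and a smaller NRHUGE, the pre-existing 1024-page pool is expected to survive. Outside the harness, the same counters and a rough stand-in for the per-node comparison can be obtained as sketched below (single populated node and 2048 kB pages assumed; variable names are illustrative, not the harness's):

  # Global hugepage counters, straight from /proc/meminfo:
  awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo

  # Rough per-node version of the "node0=1024 expecting 1024" comparison;
  # node meminfo lines look like "Node 0 HugePages_Total:  1024".
  expected=1024
  node=0
  allocated=$(awk -v n="$node" '$2 == n && $3 == "HugePages_Total:" { print $4 }' \
      "/sys/devices/system/node/node${node}/meminfo")
  echo "node${node}=${allocated} expecting ${expected}"
  [[ $allocated == "$expected" ]] || echo "hugepage pool changed unexpectedly" >&2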
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ', read -r var val _, a [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] test and continue for each /proc/meminfo field; the capture breaks off partway through this scan, after the SecPageTables comparison]
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
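[editor's note] Each comparison above is rendered by xtrace as `[[ Field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` because the right-hand side of `[[ == ]]` is otherwise a glob pattern: the script quotes the variable, so bash escapes every character when echoing the command to show a literal match is intended. A two-line illustration of the difference (values made up for the example):

    key=HugePages_Rsvd
    [[ $key == "HugePages_Rsvd" ]] && echo literal-match   # quoted: exact string comparison
    [[ $key == HugePages_* ]]      && echo glob-match      # unquoted: pattern match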
00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.379 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.380 nr_hugepages=1024 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.380 resv_hugepages=0 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.380 surplus_hugepages=0 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.380 anon_hugepages=0 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29618340 kB' 'MemAvailable: 33188228 kB' 'Buffers: 2704 kB' 'Cached: 9794936 kB' 'SwapCached: 0 kB' 'Active: 6828968 kB' 'Inactive: 3505248 kB' 'Active(anon): 6439428 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540300 kB' 'Mapped: 215140 kB' 'Shmem: 5902852 kB' 'KReclaimable: 173848 kB' 'Slab: 510148 kB' 'SReclaimable: 173848 kB' 'SUnreclaim: 336300 kB' 'KernelStack: 12576 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7563124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1576540 kB' 'DirectMap2M: 14071808 kB' 'DirectMap1G: 36700160 kB' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
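[editor's note] Before each field loop the trace shows `mem=("${mem[@]#Node +([0-9]) }")`: per-node meminfo files under /sys/devices/system/node/nodeN/ prefix every line with `Node N `, and this expansion strips that prefix so the same parser handles both the global and the per-node files. A sketch of that step in isolation (requires extglob, which the traced scripts also rely on):

    shopt -s extglob                                  # needed for the +([0-9]) pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")                  # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[0]}"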
00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
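[editor's note] When the HugePages_Total record is finally reached just below, the loop echoes the value (1024) and returns, and hugepages.sh then verifies the accounting identity it expects the allocator to keep: the total must equal the requested pages plus surplus plus reserved. In shell arithmetic that check reduces to the following (get_meminfo_field is the hypothetical helper sketched earlier, not SPDK's function):

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo_field HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"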
00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.380 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20627308 kB' 'MemUsed: 3945048 kB' 'SwapCached: 0 kB' 'Active: 1174496 kB' 'Inactive: 72500 kB' 'Active(anon): 1045228 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 952564 kB' 'Mapped: 78408 kB' 'AnonPages: 297660 kB' 'Shmem: 750796 kB' 'KernelStack: 7160 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 45576 kB' 'Slab: 192080 kB' 'SReclaimable: 45576 kB' 'SUnreclaim: 146504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 
11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 
11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.381 node0=1024 expecting 1024 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.381 00:04:00.381 real 0m2.889s 00:04:00.381 user 0m1.160s 00:04:00.381 sys 0m1.679s 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.381 11:30:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.381 ************************************ 00:04:00.381 END TEST no_shrink_alloc 00:04:00.381 ************************************ 00:04:00.381 11:30:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
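[editor's note] The per-node pass above (get_nodes followed by `get_meminfo HugePages_Surp 0`) is how the test confirms where the 1024 pages actually landed: it enumerates /sys/devices/system/node/node*, reads each node's hugepage counters, and checks that node0 holds all 1024 pages ("node0=1024 expecting 1024"). A compact way to reproduce that per-node view straight from sysfs (paths assume 2 MiB hugepages; adjust the directory name for other sizes):

    for node in /sys/devices/system/node/node[0-9]*; do
        hp="$node/hugepages/hugepages-2048kB/nr_hugepages"
        [[ -r $hp ]] && echo "${node##*/}=$(cat "$hp")"
    done
    # expected on this host: node0=1024, node1=0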
00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:00.381 11:30:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:00.381 00:04:00.381 real 0m11.590s 00:04:00.381 user 0m4.459s 00:04:00.381 sys 0m6.137s 00:04:00.381 11:30:08 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.381 11:30:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.381 ************************************ 00:04:00.381 END TEST hugepages 00:04:00.381 ************************************ 00:04:00.640 11:30:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:00.640 11:30:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:00.640 11:30:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.640 11:30:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.640 11:30:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.640 ************************************ 00:04:00.640 START TEST driver 00:04:00.640 ************************************ 00:04:00.640 11:30:08 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:00.640 * Looking for test storage... 
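The guess_driver run traced below picks the kernel driver used to bind the test NVMe device: it treats the host as IOMMU-capable when /sys/kernel/iommu_groups is populated and modprobe can resolve vfio_pci, and only then settles on vfio-pci. A minimal sketch of that decision, assuming bash on Linux with kmod installed; uio_pci_generic is named here only as the usual no-IOMMU fallback for illustration, this is not a quote of the SPDK script:

#!/usr/bin/env bash
# Sketch only: choose vfio-pci when the IOMMU is usable, mirroring the checks in the trace below.
shopt -s nullglob
iommu_groups=(/sys/kernel/iommu_groups/*)
if (( ${#iommu_groups[@]} > 0 )) &&
   modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
    echo "vfio-pci"
else
    echo "uio_pci_generic"   # assumed fallback, for illustration only
fi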
00:04:00.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.640 11:30:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:00.640 11:30:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.640 11:30:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.171 11:30:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:03.171 11:30:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.171 11:30:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.171 11:30:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:03.171 ************************************ 00:04:03.171 START TEST guess_driver 00:04:03.171 ************************************ 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:03.171 11:30:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:03.171 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:03.171 11:30:11 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:03.171 Looking for driver=vfio-pci 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.171 11:30:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.548 11:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.486 11:30:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.017 00:04:08.017 real 0m5.009s 00:04:08.017 user 0m1.150s 00:04:08.017 sys 0m1.936s 00:04:08.017 11:30:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.017 11:30:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.017 ************************************ 00:04:08.017 END TEST guess_driver 00:04:08.017 ************************************ 00:04:08.276 11:30:16 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:08.276 00:04:08.276 real 0m7.624s 00:04:08.276 user 0m1.743s 00:04:08.276 sys 0m2.931s 00:04:08.276 11:30:16 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.276 11:30:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.276 ************************************ 00:04:08.276 END TEST driver 00:04:08.276 ************************************ 00:04:08.276 11:30:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.276 11:30:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:08.276 11:30:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.276 11:30:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.276 11:30:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.276 ************************************ 00:04:08.276 START TEST devices 00:04:08.276 ************************************ 00:04:08.276 11:30:16 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:08.276 * Looking for test storage... 00:04:08.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:08.276 11:30:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:08.276 11:30:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:08.276 11:30:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.276 11:30:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:10.181 
11:30:17 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:10.181 No valid GPT data, bailing 00:04:10.181 11:30:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:10.181 11:30:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:10.181 11:30:17 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:10.181 11:30:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.181 11:30:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.181 ************************************ 00:04:10.181 START TEST nvme_mount 00:04:10.181 ************************************ 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:10.181 11:30:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.792 Creating new GPT entries in memory. 00:04:10.792 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.792 other utilities. 00:04:10.792 11:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.792 11:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.792 11:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.792 11:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.792 11:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:12.171 Creating new GPT entries in memory. 00:04:12.171 The operation has completed successfully. 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2891873 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.171 11:30:19 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.171 11:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.110 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:13.111 11:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.368 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.368 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:13.368 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.368 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.368 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.369 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.369 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.628 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:13.628 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:13.628 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.628 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.628 11:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.003 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:15.004 11:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.004 11:30:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.004 11:30:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.375 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:16.376 11:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.376 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.376 00:04:16.376 real 0m6.424s 00:04:16.376 user 0m1.504s 00:04:16.376 sys 0m2.523s 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.376 11:30:24 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.376 ************************************ 00:04:16.376 END TEST nvme_mount 00:04:16.376 ************************************ 00:04:16.376 11:30:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:16.376 11:30:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:16.376 11:30:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.376 11:30:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.376 11:30:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:16.376 ************************************ 00:04:16.376 START TEST dm_mount 00:04:16.376 ************************************ 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.376 11:30:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:17.309 Creating new GPT entries in memory. 00:04:17.309 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.309 other utilities. 00:04:17.309 11:30:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.309 11:30:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.309 11:30:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:17.309 11:30:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.309 11:30:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.688 Creating new GPT entries in memory. 00:04:18.688 The operation has completed successfully. 00:04:18.688 11:30:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.688 11:30:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.688 11:30:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.688 11:30:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.688 11:30:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:19.647 The operation has completed successfully. 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2894284 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.647 11:30:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:20.584 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:20.843 11:30:28 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.843 11:30:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:21.817 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:22.076 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.076 11:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:22.076 00:04:22.076 real 0m5.789s 00:04:22.076 user 0m1.008s 00:04:22.076 sys 0m1.674s 00:04:22.076 11:30:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.076 11:30:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:22.076 ************************************ 00:04:22.076 END TEST dm_mount 00:04:22.076 ************************************ 00:04:22.076 11:30:30 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.076 11:30:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.334 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.334 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.334 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.334 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.334 11:30:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:22.334 00:04:22.334 real 0m14.234s 00:04:22.334 user 0m3.171s 00:04:22.334 sys 0m5.337s 00:04:22.334 11:30:30 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.334 11:30:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.334 ************************************ 00:04:22.334 END TEST devices 00:04:22.334 ************************************ 00:04:22.591 11:30:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:22.591 00:04:22.591 real 0m44.700s 00:04:22.591 user 0m12.789s 00:04:22.591 sys 0m20.217s 00:04:22.591 11:30:30 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.591 11:30:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.591 ************************************ 00:04:22.591 END TEST setup.sh 00:04:22.591 ************************************ 00:04:22.591 11:30:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:22.591 11:30:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:23.963 Hugepages 00:04:23.963 node hugesize free / total 00:04:23.963 node0 1048576kB 0 / 0 00:04:23.963 node0 2048kB 2048 / 2048 00:04:23.963 node1 1048576kB 0 / 0 00:04:23.963 node1 2048kB 0 / 0 00:04:23.963 00:04:23.963 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.963 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:23.963 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:23.963 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:23.963 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:23.963 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:23.963 11:30:31 -- spdk/autotest.sh@130 -- # uname -s 00:04:23.963 11:30:31 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:23.963 11:30:31 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:23.963 11:30:31 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.336 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:25.336 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:25.336 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:26.272 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.272 11:30:34 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:27.206 11:30:35 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:27.206 11:30:35 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:27.206 11:30:35 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.206 11:30:35 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:27.206 11:30:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:27.206 11:30:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:27.206 11:30:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.206 11:30:35 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:27.206 11:30:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:27.464 11:30:35 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:27.464 11:30:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:04:27.464 11:30:35 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.399 Waiting for block devices as requested 00:04:28.657 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:04:28.657 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:28.915 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:28.915 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:28.915 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:29.173 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:29.173 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:29.173 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:29.173 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:29.432 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:29.432 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:29.432 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:29.432 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:29.690 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:29.690 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:29.690 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:29.949 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:29.949 11:30:37 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:29.949 11:30:37 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:04:29.949 11:30:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:04:29.949 11:30:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:29.949 11:30:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:29.949 11:30:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:29.949 11:30:37 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:29.949 11:30:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:29.949 11:30:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:29.949 11:30:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:29.949 11:30:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:29.949 11:30:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:29.949 11:30:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:29.949 11:30:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:29.949 11:30:37 -- common/autotest_common.sh@1557 -- # continue 00:04:29.949 11:30:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:29.949 11:30:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.949 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.949 11:30:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:29.949 11:30:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.949 11:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:29.949 11:30:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.326 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:31.326 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:04:31.326 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:31.326 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:32.261 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.518 11:30:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:32.518 11:30:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.518 11:30:40 -- common/autotest_common.sh@10 -- # set +x 00:04:32.518 11:30:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:32.518 11:30:40 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:32.518 11:30:40 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.518 11:30:40 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:32.518 11:30:40 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:32.518 11:30:40 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:32.518 11:30:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:32.518 11:30:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:32.518 11:30:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.518 11:30:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.518 11:30:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:32.518 11:30:40 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:32.518 11:30:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:04:32.518 11:30:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:32.518 11:30:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:04:32.518 11:30:40 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:32.518 11:30:40 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:32.518 11:30:40 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:32.518 11:30:40 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:04:32.518 11:30:40 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:04:32.518 11:30:40 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2899619 00:04:32.518 11:30:40 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.518 11:30:40 -- common/autotest_common.sh@1598 -- # waitforlisten 2899619 00:04:32.518 11:30:40 -- common/autotest_common.sh@829 -- # '[' -z 2899619 ']' 00:04:32.518 11:30:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.518 11:30:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.518 11:30:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.518 11:30:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.518 11:30:40 -- common/autotest_common.sh@10 -- # set +x 00:04:32.518 [2024-07-15 11:30:40.416890] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:32.518 [2024-07-15 11:30:40.416982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899619 ] 00:04:32.518 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.518 [2024-07-15 11:30:40.475113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.775 [2024-07-15 11:30:40.586845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.039 11:30:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.040 11:30:40 -- common/autotest_common.sh@862 -- # return 0 00:04:33.040 11:30:40 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:33.040 11:30:40 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:33.040 11:30:40 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:04:36.358 nvme0n1 00:04:36.358 11:30:43 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:36.358 [2024-07-15 11:30:44.132237] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:36.358 [2024-07-15 11:30:44.132282] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:36.358 request: 00:04:36.358 { 00:04:36.358 "nvme_ctrlr_name": "nvme0", 00:04:36.358 "password": "test", 00:04:36.358 "method": "bdev_nvme_opal_revert", 00:04:36.358 "req_id": 1 00:04:36.358 } 00:04:36.358 Got JSON-RPC error response 00:04:36.358 response: 00:04:36.358 { 00:04:36.358 "code": -32603, 00:04:36.358 "message": "Internal error" 00:04:36.358 } 00:04:36.358 11:30:44 -- common/autotest_common.sh@1604 -- # true 00:04:36.358 11:30:44 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:36.358 11:30:44 -- common/autotest_common.sh@1608 -- # killprocess 2899619 00:04:36.358 11:30:44 -- common/autotest_common.sh@948 -- # '[' -z 2899619 ']' 00:04:36.358 11:30:44 -- common/autotest_common.sh@952 -- # kill -0 2899619 00:04:36.358 11:30:44 -- common/autotest_common.sh@953 -- # uname 00:04:36.358 11:30:44 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.358 11:30:44 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2899619 00:04:36.358 11:30:44 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.358 11:30:44 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.358 11:30:44 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2899619' 00:04:36.358 killing process with pid 2899619 00:04:36.358 11:30:44 -- common/autotest_common.sh@967 -- # kill 2899619 00:04:36.358 11:30:44 -- common/autotest_common.sh@972 -- # wait 2899619 00:04:38.283 11:30:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:38.283 11:30:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:38.283 11:30:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.283 11:30:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:38.283 11:30:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:38.283 11:30:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.283 11:30:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.283 11:30:45 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:38.283 11:30:45 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:38.283 11:30:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.283 11:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.283 11:30:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.283 ************************************ 00:04:38.283 START TEST env 00:04:38.283 ************************************ 00:04:38.283 11:30:45 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:38.283 * Looking for test storage... 00:04:38.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:38.283 11:30:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.283 11:30:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.283 11:30:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.283 11:30:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.283 ************************************ 00:04:38.283 START TEST env_memory 00:04:38.283 ************************************ 00:04:38.283 11:30:46 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:38.283 00:04:38.283 00:04:38.283 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.283 http://cunit.sourceforge.net/ 00:04:38.283 00:04:38.283 00:04:38.283 Suite: memory 00:04:38.283 Test: alloc and free memory map ...[2024-07-15 11:30:46.061234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:38.283 passed 00:04:38.283 Test: mem map translation ...[2024-07-15 11:30:46.082545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:38.283 [2024-07-15 11:30:46.082569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:38.283 [2024-07-15 11:30:46.082612] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:38.283 [2024-07-15 11:30:46.082624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:38.283 passed 00:04:38.283 Test: mem map registration ...[2024-07-15 11:30:46.125867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:38.283 [2024-07-15 11:30:46.125890] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:38.283 passed 00:04:38.283 Test: mem map adjacent registrations ...passed 00:04:38.283 00:04:38.284 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.284 suites 1 1 n/a 0 0 00:04:38.284 tests 4 4 4 0 0 00:04:38.284 asserts 152 152 152 0 n/a 00:04:38.284 00:04:38.284 Elapsed time = 0.145 seconds 00:04:38.284 00:04:38.284 real 0m0.153s 00:04:38.284 user 0m0.141s 00:04:38.284 sys 0m0.012s 00:04:38.284 11:30:46 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.284 11:30:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:38.284 ************************************ 00:04:38.284 END TEST env_memory 00:04:38.284 ************************************ 00:04:38.284 11:30:46 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.284 11:30:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:38.284 11:30:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.284 11:30:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.284 11:30:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.284 ************************************ 00:04:38.284 START TEST env_vtophys 00:04:38.284 ************************************ 00:04:38.284 11:30:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:38.284 EAL: lib.eal log level changed from notice to debug 00:04:38.284 EAL: Detected lcore 0 as core 0 on socket 0 00:04:38.284 EAL: Detected lcore 1 as core 1 on socket 0 00:04:38.284 EAL: Detected lcore 2 as core 2 on socket 0 00:04:38.284 EAL: Detected lcore 3 as core 3 on socket 0 00:04:38.284 EAL: Detected lcore 4 as core 4 on socket 0 00:04:38.284 EAL: Detected lcore 5 as core 5 on socket 0 00:04:38.284 EAL: Detected lcore 6 as core 8 on socket 0 00:04:38.284 EAL: Detected lcore 7 as core 9 on socket 0 00:04:38.284 EAL: Detected lcore 8 as core 10 on socket 0 00:04:38.284 EAL: Detected lcore 9 as core 11 on socket 0 00:04:38.284 EAL: Detected lcore 10 as core 12 on socket 0 00:04:38.284 EAL: Detected lcore 11 as core 13 on socket 0 00:04:38.284 EAL: Detected lcore 12 as core 0 on socket 1 00:04:38.284 EAL: Detected lcore 13 as core 1 on socket 1 00:04:38.284 EAL: Detected lcore 14 as core 2 on socket 1 00:04:38.284 EAL: Detected lcore 15 as core 3 on socket 1 00:04:38.284 EAL: Detected lcore 16 as core 4 on socket 1 00:04:38.284 EAL: Detected lcore 17 as core 5 on socket 1 00:04:38.284 EAL: Detected lcore 18 as core 8 on socket 1 00:04:38.284 EAL: Detected lcore 19 as core 9 on socket 1 00:04:38.284 EAL: Detected lcore 20 as core 10 on socket 1 00:04:38.284 EAL: Detected lcore 21 as core 11 on socket 1 00:04:38.284 EAL: Detected lcore 22 as core 12 on socket 1 00:04:38.284 EAL: Detected lcore 23 as core 13 on socket 1 00:04:38.284 EAL: Detected lcore 24 as core 0 on socket 0 00:04:38.284 EAL: Detected lcore 25 as core 1 on socket 0 00:04:38.284 EAL: Detected lcore 26 as core 2 on socket 0 00:04:38.284 EAL: Detected lcore 27 as core 3 on socket 0 00:04:38.284 EAL: Detected lcore 28 as core 4 on socket 0 00:04:38.284 EAL: Detected lcore 29 as core 5 on socket 0 00:04:38.284 EAL: Detected lcore 30 as core 8 on socket 0 00:04:38.284 EAL: Detected lcore 31 as core 9 on socket 0 00:04:38.284 EAL: Detected lcore 32 as core 10 on socket 0 00:04:38.284 EAL: Detected lcore 33 as core 11 on socket 0 00:04:38.284 EAL: Detected lcore 34 as core 12 on socket 0 00:04:38.284 EAL: Detected lcore 35 as core 13 on socket 0 00:04:38.284 EAL: Detected lcore 36 as core 0 on socket 1 00:04:38.284 EAL: Detected lcore 37 as core 1 on socket 1 00:04:38.284 EAL: Detected lcore 38 as core 2 on socket 1 00:04:38.284 EAL: Detected lcore 39 as core 3 on socket 1 00:04:38.284 EAL: Detected lcore 40 as core 4 on socket 1 00:04:38.284 EAL: Detected lcore 41 as core 5 on socket 1 00:04:38.284 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:38.284 EAL: Detected lcore 43 as core 9 on socket 1 00:04:38.284 EAL: Detected lcore 44 as core 10 on socket 1 00:04:38.284 EAL: Detected lcore 45 as core 11 on socket 1 00:04:38.284 EAL: Detected lcore 46 as core 12 on socket 1 00:04:38.284 EAL: Detected lcore 47 as core 13 on socket 1 00:04:38.284 EAL: Maximum logical cores by configuration: 128 00:04:38.284 EAL: Detected CPU lcores: 48 00:04:38.284 EAL: Detected NUMA nodes: 2 00:04:38.284 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:38.284 EAL: Detected shared linkage of DPDK 00:04:38.284 EAL: No shared files mode enabled, IPC will be disabled 00:04:38.543 EAL: Bus pci wants IOVA as 'DC' 00:04:38.543 EAL: Buses did not request a specific IOVA mode. 00:04:38.543 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:38.543 EAL: Selected IOVA mode 'VA' 00:04:38.543 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.543 EAL: Probing VFIO support... 00:04:38.543 EAL: IOMMU type 1 (Type 1) is supported 00:04:38.543 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:38.543 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:38.543 EAL: VFIO support initialized 00:04:38.543 EAL: Ask a virtual area of 0x2e000 bytes 00:04:38.543 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:38.543 EAL: Setting up physically contiguous memory... 00:04:38.543 EAL: Setting maximum number of open files to 524288 00:04:38.543 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:38.543 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:38.543 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:38.543 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:38.543 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:38.543 EAL: Ask a virtual area of 0x61000 bytes 00:04:38.543 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:38.543 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:38.543 EAL: Ask a virtual area of 0x400000000 bytes 00:04:38.543 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:38.543 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:38.543 EAL: Hugepages will be freed exactly as allocated. 00:04:38.543 EAL: No shared files mode enabled, IPC is disabled 00:04:38.543 EAL: No shared files mode enabled, IPC is disabled 00:04:38.543 EAL: TSC frequency is ~2700000 KHz 00:04:38.543 EAL: Main lcore 0 is ready (tid=7f1528e18a00;cpuset=[0]) 00:04:38.543 EAL: Trying to obtain current memory policy. 00:04:38.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.543 EAL: Restoring previous memory policy: 0 00:04:38.543 EAL: request: mp_malloc_sync 00:04:38.543 EAL: No shared files mode enabled, IPC is disabled 00:04:38.543 EAL: Heap on socket 0 was expanded by 2MB 00:04:38.543 EAL: No shared files mode enabled, IPC is disabled 00:04:38.543 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:38.543 EAL: Mem event callback 'spdk:(nil)' registered 00:04:38.544 00:04:38.544 00:04:38.544 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.544 http://cunit.sourceforge.net/ 00:04:38.544 00:04:38.544 00:04:38.544 Suite: components_suite 00:04:38.544 Test: vtophys_malloc_test ...passed 00:04:38.544 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 4MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 4MB 00:04:38.544 EAL: Trying to obtain current memory policy. 
00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 6MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 6MB 00:04:38.544 EAL: Trying to obtain current memory policy. 00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 10MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 10MB 00:04:38.544 EAL: Trying to obtain current memory policy. 00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 18MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 18MB 00:04:38.544 EAL: Trying to obtain current memory policy. 00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 34MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 34MB 00:04:38.544 EAL: Trying to obtain current memory policy. 00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 66MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 66MB 00:04:38.544 EAL: Trying to obtain current memory policy. 
00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 130MB 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was shrunk by 130MB 00:04:38.544 EAL: Trying to obtain current memory policy. 00:04:38.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.544 EAL: Restoring previous memory policy: 4 00:04:38.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.544 EAL: request: mp_malloc_sync 00:04:38.544 EAL: No shared files mode enabled, IPC is disabled 00:04:38.544 EAL: Heap on socket 0 was expanded by 258MB 00:04:38.802 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.802 EAL: request: mp_malloc_sync 00:04:38.802 EAL: No shared files mode enabled, IPC is disabled 00:04:38.802 EAL: Heap on socket 0 was shrunk by 258MB 00:04:38.802 EAL: Trying to obtain current memory policy. 00:04:38.802 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.802 EAL: Restoring previous memory policy: 4 00:04:38.802 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.802 EAL: request: mp_malloc_sync 00:04:38.802 EAL: No shared files mode enabled, IPC is disabled 00:04:38.802 EAL: Heap on socket 0 was expanded by 514MB 00:04:39.060 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.060 EAL: request: mp_malloc_sync 00:04:39.060 EAL: No shared files mode enabled, IPC is disabled 00:04:39.060 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.060 EAL: Trying to obtain current memory policy. 
00:04:39.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.319 EAL: Restoring previous memory policy: 4 00:04:39.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.319 EAL: request: mp_malloc_sync 00:04:39.319 EAL: No shared files mode enabled, IPC is disabled 00:04:39.319 EAL: Heap on socket 0 was expanded by 1026MB 00:04:39.577 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.835 EAL: request: mp_malloc_sync 00:04:39.835 EAL: No shared files mode enabled, IPC is disabled 00:04:39.835 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:39.835 passed 00:04:39.835 00:04:39.835 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.835 suites 1 1 n/a 0 0 00:04:39.835 tests 2 2 2 0 0 00:04:39.835 asserts 497 497 497 0 n/a 00:04:39.835 00:04:39.835 Elapsed time = 1.311 seconds 00:04:39.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.835 EAL: request: mp_malloc_sync 00:04:39.835 EAL: No shared files mode enabled, IPC is disabled 00:04:39.835 EAL: Heap on socket 0 was shrunk by 2MB 00:04:39.835 EAL: No shared files mode enabled, IPC is disabled 00:04:39.835 EAL: No shared files mode enabled, IPC is disabled 00:04:39.835 EAL: No shared files mode enabled, IPC is disabled 00:04:39.835 00:04:39.835 real 0m1.422s 00:04:39.835 user 0m0.836s 00:04:39.835 sys 0m0.556s 00:04:39.835 11:30:47 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.835 11:30:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:39.835 ************************************ 00:04:39.835 END TEST env_vtophys 00:04:39.835 ************************************ 00:04:39.835 11:30:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:39.835 11:30:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:39.835 11:30:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.835 11:30:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.835 11:30:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.835 ************************************ 00:04:39.835 START TEST env_pci 00:04:39.835 ************************************ 00:04:39.835 11:30:47 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:39.835 00:04:39.835 00:04:39.835 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.835 http://cunit.sourceforge.net/ 00:04:39.835 00:04:39.835 00:04:39.835 Suite: pci 00:04:39.835 Test: pci_hook ...[2024-07-15 11:30:47.716285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2900513 has claimed it 00:04:39.835 EAL: Cannot find device (10000:00:01.0) 00:04:39.835 EAL: Failed to attach device on primary process 00:04:39.835 passed 00:04:39.835 00:04:39.835 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.835 suites 1 1 n/a 0 0 00:04:39.835 tests 1 1 1 0 0 00:04:39.835 asserts 25 25 25 0 n/a 00:04:39.835 00:04:39.835 Elapsed time = 0.022 seconds 00:04:39.835 00:04:39.835 real 0m0.034s 00:04:39.835 user 0m0.012s 00:04:39.835 sys 0m0.022s 00:04:39.835 11:30:47 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.835 11:30:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:39.835 ************************************ 00:04:39.835 END TEST env_pci 00:04:39.835 ************************************ 
00:04:39.835 11:30:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:39.835 11:30:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:39.835 11:30:47 env -- env/env.sh@15 -- # uname 00:04:39.835 11:30:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:39.835 11:30:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:39.835 11:30:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.835 11:30:47 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:39.835 11:30:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.835 11:30:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.835 ************************************ 00:04:39.835 START TEST env_dpdk_post_init 00:04:39.835 ************************************ 00:04:39.835 11:30:47 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.835 EAL: Detected CPU lcores: 48 00:04:39.835 EAL: Detected NUMA nodes: 2 00:04:40.094 EAL: Detected shared linkage of DPDK 00:04:40.094 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:40.094 EAL: Selected IOVA mode 'VA' 00:04:40.094 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.094 EAL: VFIO support initialized 00:04:40.095 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:40.095 EAL: Using IOMMU type 1 (Type 1) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:40.095 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:40.355 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:40.355 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:40.924 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:04:44.202 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:04:44.202 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:04:44.462 Starting DPDK initialization... 00:04:44.462 Starting SPDK post initialization... 00:04:44.462 SPDK NVMe probe 00:04:44.462 Attaching to 0000:82:00.0 00:04:44.462 Attached to 0000:82:00.0 00:04:44.462 Cleaning up... 
00:04:44.462 00:04:44.462 real 0m4.442s 00:04:44.462 user 0m3.316s 00:04:44.462 sys 0m0.186s 00:04:44.462 11:30:52 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.462 11:30:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.462 ************************************ 00:04:44.462 END TEST env_dpdk_post_init 00:04:44.462 ************************************ 00:04:44.462 11:30:52 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.462 11:30:52 env -- env/env.sh@26 -- # uname 00:04:44.462 11:30:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.462 11:30:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.462 11:30:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.462 11:30:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.462 11:30:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.462 ************************************ 00:04:44.462 START TEST env_mem_callbacks 00:04:44.462 ************************************ 00:04:44.462 11:30:52 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.462 EAL: Detected CPU lcores: 48 00:04:44.462 EAL: Detected NUMA nodes: 2 00:04:44.462 EAL: Detected shared linkage of DPDK 00:04:44.462 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.462 EAL: Selected IOVA mode 'VA' 00:04:44.462 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.462 EAL: VFIO support initialized 00:04:44.462 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.462 00:04:44.462 00:04:44.462 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.462 http://cunit.sourceforge.net/ 00:04:44.462 00:04:44.462 00:04:44.462 Suite: memory 00:04:44.462 Test: test ... 
00:04:44.462 register 0x200000200000 2097152 00:04:44.462 malloc 3145728 00:04:44.462 register 0x200000400000 4194304 00:04:44.462 buf 0x200000500000 len 3145728 PASSED 00:04:44.462 malloc 64 00:04:44.462 buf 0x2000004fff40 len 64 PASSED 00:04:44.462 malloc 4194304 00:04:44.462 register 0x200000800000 6291456 00:04:44.462 buf 0x200000a00000 len 4194304 PASSED 00:04:44.462 free 0x200000500000 3145728 00:04:44.462 free 0x2000004fff40 64 00:04:44.462 unregister 0x200000400000 4194304 PASSED 00:04:44.462 free 0x200000a00000 4194304 00:04:44.462 unregister 0x200000800000 6291456 PASSED 00:04:44.462 malloc 8388608 00:04:44.462 register 0x200000400000 10485760 00:04:44.462 buf 0x200000600000 len 8388608 PASSED 00:04:44.462 free 0x200000600000 8388608 00:04:44.462 unregister 0x200000400000 10485760 PASSED 00:04:44.462 passed 00:04:44.462 00:04:44.462 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.462 suites 1 1 n/a 0 0 00:04:44.462 tests 1 1 1 0 0 00:04:44.462 asserts 15 15 15 0 n/a 00:04:44.462 00:04:44.462 Elapsed time = 0.004 seconds 00:04:44.462 00:04:44.462 real 0m0.048s 00:04:44.462 user 0m0.012s 00:04:44.462 sys 0m0.036s 00:04:44.462 11:30:52 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.462 11:30:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.462 ************************************ 00:04:44.462 END TEST env_mem_callbacks 00:04:44.462 ************************************ 00:04:44.462 11:30:52 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.462 00:04:44.462 real 0m6.408s 00:04:44.462 user 0m4.433s 00:04:44.462 sys 0m1.021s 00:04:44.462 11:30:52 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.462 11:30:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.462 ************************************ 00:04:44.462 END TEST env 00:04:44.462 ************************************ 00:04:44.462 11:30:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.462 11:30:52 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:44.462 11:30:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.462 11:30:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.462 11:30:52 -- common/autotest_common.sh@10 -- # set +x 00:04:44.462 ************************************ 00:04:44.462 START TEST rpc 00:04:44.463 ************************************ 00:04:44.463 11:30:52 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:44.721 * Looking for test storage... 00:04:44.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.721 11:30:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2901170 00:04:44.721 11:30:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:44.721 11:30:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.721 11:30:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2901170 00:04:44.721 11:30:52 rpc -- common/autotest_common.sh@829 -- # '[' -z 2901170 ']' 00:04:44.721 11:30:52 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.721 11:30:52 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.721 11:30:52 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:44.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.721 11:30:52 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.721 11:30:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.721 [2024-07-15 11:30:52.510067] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:44.721 [2024-07-15 11:30:52.510146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901170 ] 00:04:44.721 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.721 [2024-07-15 11:30:52.566550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.721 [2024-07-15 11:30:52.674058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.721 [2024-07-15 11:30:52.674122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2901170' to capture a snapshot of events at runtime. 00:04:44.721 [2024-07-15 11:30:52.674135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.721 [2024-07-15 11:30:52.674146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.721 [2024-07-15 11:30:52.674155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2901170 for offline analysis/debug. 00:04:44.721 [2024-07-15 11:30:52.674187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.979 11:30:52 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.979 11:30:52 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.979 11:30:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.979 11:30:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.979 11:30:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.979 11:30:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.979 11:30:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.979 11:30:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.979 11:30:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.979 ************************************ 00:04:44.979 START TEST rpc_integrity 00:04:44.979 ************************************ 00:04:44.979 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:44.979 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.979 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.979 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.979 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.979 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:44.979 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.237 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.237 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.237 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.237 11:30:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.237 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.237 { 00:04:45.237 "name": "Malloc0", 00:04:45.237 "aliases": [ 00:04:45.237 "796dba38-6f86-4978-811a-fa26ec2f4aa9" 00:04:45.237 ], 00:04:45.237 "product_name": "Malloc disk", 00:04:45.237 "block_size": 512, 00:04:45.237 "num_blocks": 16384, 00:04:45.237 "uuid": "796dba38-6f86-4978-811a-fa26ec2f4aa9", 00:04:45.237 "assigned_rate_limits": { 00:04:45.237 "rw_ios_per_sec": 0, 00:04:45.237 "rw_mbytes_per_sec": 0, 00:04:45.237 "r_mbytes_per_sec": 0, 00:04:45.237 "w_mbytes_per_sec": 0 00:04:45.237 }, 00:04:45.237 "claimed": false, 00:04:45.237 "zoned": false, 00:04:45.237 "supported_io_types": { 00:04:45.237 "read": true, 00:04:45.237 "write": true, 00:04:45.237 "unmap": true, 00:04:45.237 "flush": true, 00:04:45.237 "reset": true, 00:04:45.237 "nvme_admin": false, 00:04:45.237 "nvme_io": false, 00:04:45.237 "nvme_io_md": false, 00:04:45.237 "write_zeroes": true, 00:04:45.237 "zcopy": true, 00:04:45.237 "get_zone_info": false, 00:04:45.237 "zone_management": false, 00:04:45.237 "zone_append": false, 00:04:45.237 "compare": false, 00:04:45.237 "compare_and_write": false, 00:04:45.237 "abort": true, 00:04:45.237 "seek_hole": false, 00:04:45.237 "seek_data": false, 00:04:45.237 "copy": true, 00:04:45.237 "nvme_iov_md": false 00:04:45.237 }, 00:04:45.237 "memory_domains": [ 00:04:45.237 { 00:04:45.237 "dma_device_id": "system", 00:04:45.237 "dma_device_type": 1 00:04:45.237 }, 00:04:45.237 { 00:04:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.237 "dma_device_type": 2 00:04:45.237 } 00:04:45.237 ], 00:04:45.237 "driver_specific": {} 00:04:45.237 } 00:04:45.237 ]' 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 [2024-07-15 11:30:53.037373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.237 [2024-07-15 11:30:53.037410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.237 [2024-07-15 11:30:53.037429] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5583e0 00:04:45.237 [2024-07-15 11:30:53.037441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.237 
[2024-07-15 11:30:53.038639] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.237 [2024-07-15 11:30:53.038661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.237 Passthru0 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.237 { 00:04:45.237 "name": "Malloc0", 00:04:45.237 "aliases": [ 00:04:45.237 "796dba38-6f86-4978-811a-fa26ec2f4aa9" 00:04:45.237 ], 00:04:45.237 "product_name": "Malloc disk", 00:04:45.237 "block_size": 512, 00:04:45.237 "num_blocks": 16384, 00:04:45.237 "uuid": "796dba38-6f86-4978-811a-fa26ec2f4aa9", 00:04:45.237 "assigned_rate_limits": { 00:04:45.237 "rw_ios_per_sec": 0, 00:04:45.237 "rw_mbytes_per_sec": 0, 00:04:45.237 "r_mbytes_per_sec": 0, 00:04:45.237 "w_mbytes_per_sec": 0 00:04:45.237 }, 00:04:45.237 "claimed": true, 00:04:45.237 "claim_type": "exclusive_write", 00:04:45.237 "zoned": false, 00:04:45.237 "supported_io_types": { 00:04:45.237 "read": true, 00:04:45.237 "write": true, 00:04:45.237 "unmap": true, 00:04:45.237 "flush": true, 00:04:45.237 "reset": true, 00:04:45.237 "nvme_admin": false, 00:04:45.237 "nvme_io": false, 00:04:45.237 "nvme_io_md": false, 00:04:45.237 "write_zeroes": true, 00:04:45.237 "zcopy": true, 00:04:45.237 "get_zone_info": false, 00:04:45.237 "zone_management": false, 00:04:45.237 "zone_append": false, 00:04:45.237 "compare": false, 00:04:45.237 "compare_and_write": false, 00:04:45.237 "abort": true, 00:04:45.237 "seek_hole": false, 00:04:45.237 "seek_data": false, 00:04:45.237 "copy": true, 00:04:45.237 "nvme_iov_md": false 00:04:45.237 }, 00:04:45.237 "memory_domains": [ 00:04:45.237 { 00:04:45.237 "dma_device_id": "system", 00:04:45.237 "dma_device_type": 1 00:04:45.237 }, 00:04:45.237 { 00:04:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.237 "dma_device_type": 2 00:04:45.237 } 00:04:45.237 ], 00:04:45.237 "driver_specific": {} 00:04:45.237 }, 00:04:45.237 { 00:04:45.237 "name": "Passthru0", 00:04:45.237 "aliases": [ 00:04:45.237 "ff4ab8ba-d69e-5128-81ef-4f45606994c2" 00:04:45.237 ], 00:04:45.237 "product_name": "passthru", 00:04:45.237 "block_size": 512, 00:04:45.237 "num_blocks": 16384, 00:04:45.237 "uuid": "ff4ab8ba-d69e-5128-81ef-4f45606994c2", 00:04:45.237 "assigned_rate_limits": { 00:04:45.237 "rw_ios_per_sec": 0, 00:04:45.237 "rw_mbytes_per_sec": 0, 00:04:45.237 "r_mbytes_per_sec": 0, 00:04:45.237 "w_mbytes_per_sec": 0 00:04:45.237 }, 00:04:45.237 "claimed": false, 00:04:45.237 "zoned": false, 00:04:45.237 "supported_io_types": { 00:04:45.237 "read": true, 00:04:45.237 "write": true, 00:04:45.237 "unmap": true, 00:04:45.237 "flush": true, 00:04:45.237 "reset": true, 00:04:45.237 "nvme_admin": false, 00:04:45.237 "nvme_io": false, 00:04:45.237 "nvme_io_md": false, 00:04:45.237 "write_zeroes": true, 00:04:45.237 "zcopy": true, 00:04:45.237 "get_zone_info": false, 00:04:45.237 "zone_management": false, 00:04:45.237 "zone_append": false, 00:04:45.237 "compare": false, 00:04:45.237 "compare_and_write": false, 00:04:45.237 "abort": true, 00:04:45.237 "seek_hole": false, 
00:04:45.237 "seek_data": false, 00:04:45.237 "copy": true, 00:04:45.237 "nvme_iov_md": false 00:04:45.237 }, 00:04:45.237 "memory_domains": [ 00:04:45.237 { 00:04:45.237 "dma_device_id": "system", 00:04:45.237 "dma_device_type": 1 00:04:45.237 }, 00:04:45.237 { 00:04:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.237 "dma_device_type": 2 00:04:45.237 } 00:04:45.237 ], 00:04:45.237 "driver_specific": { 00:04:45.237 "passthru": { 00:04:45.237 "name": "Passthru0", 00:04:45.237 "base_bdev_name": "Malloc0" 00:04:45.237 } 00:04:45.237 } 00:04:45.237 } 00:04:45.237 ]' 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.237 11:30:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.237 00:04:45.237 real 0m0.213s 00:04:45.237 user 0m0.141s 00:04:45.237 sys 0m0.012s 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 ************************************ 00:04:45.237 END TEST rpc_integrity 00:04:45.237 ************************************ 00:04:45.237 11:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.237 11:30:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.237 11:30:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.237 11:30:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.237 11:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.237 ************************************ 00:04:45.237 START TEST rpc_plugins 00:04:45.237 ************************************ 00:04:45.237 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:45.237 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.237 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.237 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.238 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.238 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.238 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:45.238 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.238 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.238 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.238 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.238 { 00:04:45.238 "name": "Malloc1", 00:04:45.238 "aliases": [ 00:04:45.238 "ce9dd93e-f8c9-41a0-a595-c6ea3ef51f9d" 00:04:45.238 ], 00:04:45.238 "product_name": "Malloc disk", 00:04:45.238 "block_size": 4096, 00:04:45.238 "num_blocks": 256, 00:04:45.238 "uuid": "ce9dd93e-f8c9-41a0-a595-c6ea3ef51f9d", 00:04:45.238 "assigned_rate_limits": { 00:04:45.238 "rw_ios_per_sec": 0, 00:04:45.238 "rw_mbytes_per_sec": 0, 00:04:45.238 "r_mbytes_per_sec": 0, 00:04:45.238 "w_mbytes_per_sec": 0 00:04:45.238 }, 00:04:45.238 "claimed": false, 00:04:45.238 "zoned": false, 00:04:45.238 "supported_io_types": { 00:04:45.238 "read": true, 00:04:45.238 "write": true, 00:04:45.238 "unmap": true, 00:04:45.238 "flush": true, 00:04:45.238 "reset": true, 00:04:45.238 "nvme_admin": false, 00:04:45.238 "nvme_io": false, 00:04:45.238 "nvme_io_md": false, 00:04:45.238 "write_zeroes": true, 00:04:45.238 "zcopy": true, 00:04:45.238 "get_zone_info": false, 00:04:45.238 "zone_management": false, 00:04:45.238 "zone_append": false, 00:04:45.238 "compare": false, 00:04:45.238 "compare_and_write": false, 00:04:45.238 "abort": true, 00:04:45.238 "seek_hole": false, 00:04:45.238 "seek_data": false, 00:04:45.238 "copy": true, 00:04:45.238 "nvme_iov_md": false 00:04:45.238 }, 00:04:45.238 "memory_domains": [ 00:04:45.238 { 00:04:45.238 "dma_device_id": "system", 00:04:45.238 "dma_device_type": 1 00:04:45.238 }, 00:04:45.238 { 00:04:45.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.238 "dma_device_type": 2 00:04:45.238 } 00:04:45.238 ], 00:04:45.238 "driver_specific": {} 00:04:45.238 } 00:04:45.238 ]' 00:04:45.238 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.495 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.495 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.495 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.495 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.495 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.495 11:30:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.495 00:04:45.495 real 0m0.105s 00:04:45.495 user 0m0.065s 00:04:45.495 sys 0m0.009s 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.495 11:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.495 ************************************ 00:04:45.495 END TEST rpc_plugins 00:04:45.495 ************************************ 00:04:45.495 11:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.495 11:30:53 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.495 11:30:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.495 11:30:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.495 11:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.495 ************************************ 00:04:45.495 START TEST rpc_trace_cmd_test 00:04:45.495 ************************************ 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.495 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2901170", 00:04:45.495 "tpoint_group_mask": "0x8", 00:04:45.495 "iscsi_conn": { 00:04:45.495 "mask": "0x2", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "scsi": { 00:04:45.495 "mask": "0x4", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "bdev": { 00:04:45.495 "mask": "0x8", 00:04:45.495 "tpoint_mask": "0xffffffffffffffff" 00:04:45.495 }, 00:04:45.495 "nvmf_rdma": { 00:04:45.495 "mask": "0x10", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "nvmf_tcp": { 00:04:45.495 "mask": "0x20", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "ftl": { 00:04:45.495 "mask": "0x40", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "blobfs": { 00:04:45.495 "mask": "0x80", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "dsa": { 00:04:45.495 "mask": "0x200", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "thread": { 00:04:45.495 "mask": "0x400", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "nvme_pcie": { 00:04:45.495 "mask": "0x800", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "iaa": { 00:04:45.495 "mask": "0x1000", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "nvme_tcp": { 00:04:45.495 "mask": "0x2000", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "bdev_nvme": { 00:04:45.495 "mask": "0x4000", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 }, 00:04:45.495 "sock": { 00:04:45.495 "mask": "0x8000", 00:04:45.495 "tpoint_mask": "0x0" 00:04:45.495 } 00:04:45.495 }' 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.495 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.752 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.752 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.752 11:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
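The rpc_integrity and rpc_trace_cmd_test runs above reduce to a handful of RPCs plus jq checks on their output. A condensed sketch using the method names visible in the trace (rpc_cmd is approximated with scripts/rpc.py; the exact assertions are in test/rpc/rpc.sh):

    set -e                                              # abort on any failed check
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    [ "$(rpc bdev_get_bdevs | jq length)" -eq 0 ]       # no bdevs registered yet
    malloc=$(rpc bdev_malloc_create 8 512)              # 8 MiB, 512-byte blocks -> prints "Malloc0"
    rpc bdev_passthru_create -b "$malloc" -p Passthru0  # claim it behind a passthru bdev
    [ "$(rpc bdev_get_bdevs | jq length)" -eq 2 ]       # Malloc0 plus Passthru0
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete "$malloc"
    # the bdev trace group (mask 0x8) was enabled via 'spdk_tgt -e bdev', so its
    # tpoint_mask must be non-zero, which is the 0xffffffffffffffff check above
    rpc trace_get_info | jq -r .bdev.tpoint_mask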
00:04:45.752 00:04:45.752 real 0m0.181s 00:04:45.752 user 0m0.159s 00:04:45.752 sys 0m0.012s 00:04:45.752 11:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.752 11:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 ************************************ 00:04:45.752 END TEST rpc_trace_cmd_test 00:04:45.752 ************************************ 00:04:45.752 11:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.752 11:30:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.752 11:30:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.752 11:30:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.752 11:30:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.752 11:30:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.752 11:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 ************************************ 00:04:45.752 START TEST rpc_daemon_integrity 00:04:45.752 ************************************ 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.752 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.753 { 00:04:45.753 "name": "Malloc2", 00:04:45.753 "aliases": [ 00:04:45.753 "67f85060-b530-44aa-9d76-29ec4ac232f0" 00:04:45.753 ], 00:04:45.753 "product_name": "Malloc disk", 00:04:45.753 "block_size": 512, 00:04:45.753 "num_blocks": 16384, 00:04:45.753 "uuid": "67f85060-b530-44aa-9d76-29ec4ac232f0", 00:04:45.753 "assigned_rate_limits": { 00:04:45.753 "rw_ios_per_sec": 0, 00:04:45.753 "rw_mbytes_per_sec": 0, 00:04:45.753 "r_mbytes_per_sec": 0, 00:04:45.753 "w_mbytes_per_sec": 0 00:04:45.753 }, 00:04:45.753 "claimed": false, 00:04:45.753 "zoned": false, 00:04:45.753 "supported_io_types": { 00:04:45.753 "read": true, 00:04:45.753 "write": true, 00:04:45.753 "unmap": true, 00:04:45.753 "flush": true, 00:04:45.753 "reset": true, 00:04:45.753 "nvme_admin": false, 00:04:45.753 "nvme_io": false, 
00:04:45.753 "nvme_io_md": false, 00:04:45.753 "write_zeroes": true, 00:04:45.753 "zcopy": true, 00:04:45.753 "get_zone_info": false, 00:04:45.753 "zone_management": false, 00:04:45.753 "zone_append": false, 00:04:45.753 "compare": false, 00:04:45.753 "compare_and_write": false, 00:04:45.753 "abort": true, 00:04:45.753 "seek_hole": false, 00:04:45.753 "seek_data": false, 00:04:45.753 "copy": true, 00:04:45.753 "nvme_iov_md": false 00:04:45.753 }, 00:04:45.753 "memory_domains": [ 00:04:45.753 { 00:04:45.753 "dma_device_id": "system", 00:04:45.753 "dma_device_type": 1 00:04:45.753 }, 00:04:45.753 { 00:04:45.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.753 "dma_device_type": 2 00:04:45.753 } 00:04:45.753 ], 00:04:45.753 "driver_specific": {} 00:04:45.753 } 00:04:45.753 ]' 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.753 [2024-07-15 11:30:53.675209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:45.753 [2024-07-15 11:30:53.675246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.753 [2024-07-15 11:30:53.675265] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f62f0 00:04:45.753 [2024-07-15 11:30:53.675278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.753 [2024-07-15 11:30:53.676396] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.753 [2024-07-15 11:30:53.676421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.753 Passthru0 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.753 { 00:04:45.753 "name": "Malloc2", 00:04:45.753 "aliases": [ 00:04:45.753 "67f85060-b530-44aa-9d76-29ec4ac232f0" 00:04:45.753 ], 00:04:45.753 "product_name": "Malloc disk", 00:04:45.753 "block_size": 512, 00:04:45.753 "num_blocks": 16384, 00:04:45.753 "uuid": "67f85060-b530-44aa-9d76-29ec4ac232f0", 00:04:45.753 "assigned_rate_limits": { 00:04:45.753 "rw_ios_per_sec": 0, 00:04:45.753 "rw_mbytes_per_sec": 0, 00:04:45.753 "r_mbytes_per_sec": 0, 00:04:45.753 "w_mbytes_per_sec": 0 00:04:45.753 }, 00:04:45.753 "claimed": true, 00:04:45.753 "claim_type": "exclusive_write", 00:04:45.753 "zoned": false, 00:04:45.753 "supported_io_types": { 00:04:45.753 "read": true, 00:04:45.753 "write": true, 00:04:45.753 "unmap": true, 00:04:45.753 "flush": true, 00:04:45.753 "reset": true, 00:04:45.753 "nvme_admin": false, 00:04:45.753 "nvme_io": false, 00:04:45.753 "nvme_io_md": false, 00:04:45.753 "write_zeroes": true, 00:04:45.753 "zcopy": true, 00:04:45.753 "get_zone_info": 
false, 00:04:45.753 "zone_management": false, 00:04:45.753 "zone_append": false, 00:04:45.753 "compare": false, 00:04:45.753 "compare_and_write": false, 00:04:45.753 "abort": true, 00:04:45.753 "seek_hole": false, 00:04:45.753 "seek_data": false, 00:04:45.753 "copy": true, 00:04:45.753 "nvme_iov_md": false 00:04:45.753 }, 00:04:45.753 "memory_domains": [ 00:04:45.753 { 00:04:45.753 "dma_device_id": "system", 00:04:45.753 "dma_device_type": 1 00:04:45.753 }, 00:04:45.753 { 00:04:45.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.753 "dma_device_type": 2 00:04:45.753 } 00:04:45.753 ], 00:04:45.753 "driver_specific": {} 00:04:45.753 }, 00:04:45.753 { 00:04:45.753 "name": "Passthru0", 00:04:45.753 "aliases": [ 00:04:45.753 "dc6b493f-4347-5a98-bfac-680ee9551043" 00:04:45.753 ], 00:04:45.753 "product_name": "passthru", 00:04:45.753 "block_size": 512, 00:04:45.753 "num_blocks": 16384, 00:04:45.753 "uuid": "dc6b493f-4347-5a98-bfac-680ee9551043", 00:04:45.753 "assigned_rate_limits": { 00:04:45.753 "rw_ios_per_sec": 0, 00:04:45.753 "rw_mbytes_per_sec": 0, 00:04:45.753 "r_mbytes_per_sec": 0, 00:04:45.753 "w_mbytes_per_sec": 0 00:04:45.753 }, 00:04:45.753 "claimed": false, 00:04:45.753 "zoned": false, 00:04:45.753 "supported_io_types": { 00:04:45.753 "read": true, 00:04:45.753 "write": true, 00:04:45.753 "unmap": true, 00:04:45.753 "flush": true, 00:04:45.753 "reset": true, 00:04:45.753 "nvme_admin": false, 00:04:45.753 "nvme_io": false, 00:04:45.753 "nvme_io_md": false, 00:04:45.753 "write_zeroes": true, 00:04:45.753 "zcopy": true, 00:04:45.753 "get_zone_info": false, 00:04:45.753 "zone_management": false, 00:04:45.753 "zone_append": false, 00:04:45.753 "compare": false, 00:04:45.753 "compare_and_write": false, 00:04:45.753 "abort": true, 00:04:45.753 "seek_hole": false, 00:04:45.753 "seek_data": false, 00:04:45.753 "copy": true, 00:04:45.753 "nvme_iov_md": false 00:04:45.753 }, 00:04:45.753 "memory_domains": [ 00:04:45.753 { 00:04:45.753 "dma_device_id": "system", 00:04:45.753 "dma_device_type": 1 00:04:45.753 }, 00:04:45.753 { 00:04:45.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.753 "dma_device_type": 2 00:04:45.753 } 00:04:45.753 ], 00:04:45.753 "driver_specific": { 00:04:45.753 "passthru": { 00:04:45.753 "name": "Passthru0", 00:04:45.753 "base_bdev_name": "Malloc2" 00:04:45.753 } 00:04:45.753 } 00:04:45.753 } 00:04:45.753 ]' 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.753 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.010 11:30:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.010 00:04:46.010 real 0m0.210s 00:04:46.010 user 0m0.132s 00:04:46.010 sys 0m0.021s 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.010 11:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.010 ************************************ 00:04:46.010 END TEST rpc_daemon_integrity 00:04:46.010 ************************************ 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.010 11:30:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.010 11:30:53 rpc -- rpc/rpc.sh@84 -- # killprocess 2901170 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@948 -- # '[' -z 2901170 ']' 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@952 -- # kill -0 2901170 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901170 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901170' 00:04:46.010 killing process with pid 2901170 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@967 -- # kill 2901170 00:04:46.010 11:30:53 rpc -- common/autotest_common.sh@972 -- # wait 2901170 00:04:46.576 00:04:46.576 real 0m1.859s 00:04:46.576 user 0m2.324s 00:04:46.576 sys 0m0.551s 00:04:46.576 11:30:54 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.576 11:30:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.576 ************************************ 00:04:46.576 END TEST rpc 00:04:46.576 ************************************ 00:04:46.576 11:30:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.576 11:30:54 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.576 11:30:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.576 11:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.576 11:30:54 -- common/autotest_common.sh@10 -- # set +x 00:04:46.576 ************************************ 00:04:46.576 START TEST skip_rpc 00:04:46.576 ************************************ 00:04:46.576 11:30:54 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.576 * Looking for test storage... 
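The rpc.sh run above finishes the same way it began: the EXIT trap registered when spdk_tgt started is cleared and the target is reaped. The pattern, roughly (killprocess is the autotest_common.sh helper that kills the pid and waits for it to exit):

    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT   # set right after spdk_tgt starts
    # ... rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity ...
    trap - SIGINT SIGTERM EXIT                                 # clear the handler once all passed
    killprocess $spdk_pid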
00:04:46.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.576 11:30:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.576 11:30:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.576 11:30:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.576 11:30:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.576 11:30:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.576 11:30:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.576 ************************************ 00:04:46.576 START TEST skip_rpc 00:04:46.576 ************************************ 00:04:46.576 11:30:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:46.576 11:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2901601 00:04:46.576 11:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.576 11:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.576 11:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.576 [2024-07-15 11:30:54.431986] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:46.576 [2024-07-15 11:30:54.432076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901601 ] 00:04:46.576 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.576 [2024-07-15 11:30:54.490123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.834 [2024-07-15 11:30:54.601944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2901601 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2901601 ']' 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2901601 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:52.094 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901601 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901601' 00:04:52.095 killing process with pid 2901601 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2901601 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2901601 00:04:52.095 00:04:52.095 real 0m5.454s 00:04:52.095 user 0m5.149s 00:04:52.095 sys 0m0.308s 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.095 11:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.095 ************************************ 00:04:52.095 END TEST skip_rpc 00:04:52.095 ************************************ 00:04:52.095 11:30:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:52.095 11:30:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.095 11:30:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.095 11:30:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.095 11:30:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.095 ************************************ 00:04:52.095 START TEST skip_rpc_with_json 00:04:52.095 ************************************ 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2902290 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2902290 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2902290 ']' 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
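The skip_rpc case that just completed is a negative test: with --no-rpc-server the target must come up but reject RPC clients. Stripped of the NOT/killprocess helpers, it is roughly:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                   # the script sleeps instead of waiting on a socket
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
        exit 1
    fi
    kill $spdk_pid; wait $spdk_pid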
00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.095 11:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.095 [2024-07-15 11:30:59.944136] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:52.095 [2024-07-15 11:30:59.944223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902290 ] 00:04:52.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.095 [2024-07-15 11:31:00.000932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.353 [2024-07-15 11:31:00.116132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 [2024-07-15 11:31:00.358488] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:52.612 request: 00:04:52.612 { 00:04:52.612 "trtype": "tcp", 00:04:52.612 "method": "nvmf_get_transports", 00:04:52.612 "req_id": 1 00:04:52.612 } 00:04:52.612 Got JSON-RPC error response 00:04:52.612 response: 00:04:52.612 { 00:04:52.612 "code": -19, 00:04:52.612 "message": "No such device" 00:04:52.612 } 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 [2024-07-15 11:31:00.366581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.612 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.612 { 00:04:52.612 "subsystems": [ 00:04:52.612 { 00:04:52.612 "subsystem": "vfio_user_target", 00:04:52.612 "config": null 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "keyring", 00:04:52.612 "config": [] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "iobuf", 00:04:52.612 "config": [ 00:04:52.612 { 00:04:52.612 "method": "iobuf_set_options", 00:04:52.612 "params": { 00:04:52.612 "small_pool_count": 8192, 00:04:52.612 "large_pool_count": 1024, 00:04:52.612 "small_bufsize": 8192, 00:04:52.612 "large_bufsize": 
135168 00:04:52.612 } 00:04:52.612 } 00:04:52.612 ] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "sock", 00:04:52.612 "config": [ 00:04:52.612 { 00:04:52.612 "method": "sock_set_default_impl", 00:04:52.612 "params": { 00:04:52.612 "impl_name": "posix" 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "sock_impl_set_options", 00:04:52.612 "params": { 00:04:52.612 "impl_name": "ssl", 00:04:52.612 "recv_buf_size": 4096, 00:04:52.612 "send_buf_size": 4096, 00:04:52.612 "enable_recv_pipe": true, 00:04:52.612 "enable_quickack": false, 00:04:52.612 "enable_placement_id": 0, 00:04:52.612 "enable_zerocopy_send_server": true, 00:04:52.612 "enable_zerocopy_send_client": false, 00:04:52.612 "zerocopy_threshold": 0, 00:04:52.612 "tls_version": 0, 00:04:52.612 "enable_ktls": false 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "sock_impl_set_options", 00:04:52.612 "params": { 00:04:52.612 "impl_name": "posix", 00:04:52.612 "recv_buf_size": 2097152, 00:04:52.612 "send_buf_size": 2097152, 00:04:52.612 "enable_recv_pipe": true, 00:04:52.612 "enable_quickack": false, 00:04:52.612 "enable_placement_id": 0, 00:04:52.612 "enable_zerocopy_send_server": true, 00:04:52.612 "enable_zerocopy_send_client": false, 00:04:52.612 "zerocopy_threshold": 0, 00:04:52.612 "tls_version": 0, 00:04:52.612 "enable_ktls": false 00:04:52.612 } 00:04:52.612 } 00:04:52.612 ] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "vmd", 00:04:52.612 "config": [] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "accel", 00:04:52.612 "config": [ 00:04:52.612 { 00:04:52.612 "method": "accel_set_options", 00:04:52.612 "params": { 00:04:52.612 "small_cache_size": 128, 00:04:52.612 "large_cache_size": 16, 00:04:52.612 "task_count": 2048, 00:04:52.612 "sequence_count": 2048, 00:04:52.612 "buf_count": 2048 00:04:52.612 } 00:04:52.612 } 00:04:52.612 ] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "bdev", 00:04:52.612 "config": [ 00:04:52.612 { 00:04:52.612 "method": "bdev_set_options", 00:04:52.612 "params": { 00:04:52.612 "bdev_io_pool_size": 65535, 00:04:52.612 "bdev_io_cache_size": 256, 00:04:52.612 "bdev_auto_examine": true, 00:04:52.612 "iobuf_small_cache_size": 128, 00:04:52.612 "iobuf_large_cache_size": 16 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "bdev_raid_set_options", 00:04:52.612 "params": { 00:04:52.612 "process_window_size_kb": 1024 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "bdev_iscsi_set_options", 00:04:52.612 "params": { 00:04:52.612 "timeout_sec": 30 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "bdev_nvme_set_options", 00:04:52.612 "params": { 00:04:52.612 "action_on_timeout": "none", 00:04:52.612 "timeout_us": 0, 00:04:52.612 "timeout_admin_us": 0, 00:04:52.612 "keep_alive_timeout_ms": 10000, 00:04:52.612 "arbitration_burst": 0, 00:04:52.612 "low_priority_weight": 0, 00:04:52.612 "medium_priority_weight": 0, 00:04:52.612 "high_priority_weight": 0, 00:04:52.612 "nvme_adminq_poll_period_us": 10000, 00:04:52.612 "nvme_ioq_poll_period_us": 0, 00:04:52.612 "io_queue_requests": 0, 00:04:52.612 "delay_cmd_submit": true, 00:04:52.612 "transport_retry_count": 4, 00:04:52.612 "bdev_retry_count": 3, 00:04:52.612 "transport_ack_timeout": 0, 00:04:52.612 "ctrlr_loss_timeout_sec": 0, 00:04:52.612 "reconnect_delay_sec": 0, 00:04:52.612 "fast_io_fail_timeout_sec": 0, 00:04:52.612 "disable_auto_failback": false, 00:04:52.612 "generate_uuids": false, 00:04:52.612 "transport_tos": 0, 
00:04:52.612 "nvme_error_stat": false, 00:04:52.612 "rdma_srq_size": 0, 00:04:52.612 "io_path_stat": false, 00:04:52.612 "allow_accel_sequence": false, 00:04:52.612 "rdma_max_cq_size": 0, 00:04:52.612 "rdma_cm_event_timeout_ms": 0, 00:04:52.612 "dhchap_digests": [ 00:04:52.612 "sha256", 00:04:52.612 "sha384", 00:04:52.612 "sha512" 00:04:52.612 ], 00:04:52.612 "dhchap_dhgroups": [ 00:04:52.612 "null", 00:04:52.612 "ffdhe2048", 00:04:52.612 "ffdhe3072", 00:04:52.612 "ffdhe4096", 00:04:52.612 "ffdhe6144", 00:04:52.612 "ffdhe8192" 00:04:52.612 ] 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "bdev_nvme_set_hotplug", 00:04:52.612 "params": { 00:04:52.612 "period_us": 100000, 00:04:52.612 "enable": false 00:04:52.612 } 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "method": "bdev_wait_for_examine" 00:04:52.612 } 00:04:52.612 ] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "scsi", 00:04:52.612 "config": null 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "scheduler", 00:04:52.612 "config": [ 00:04:52.612 { 00:04:52.612 "method": "framework_set_scheduler", 00:04:52.612 "params": { 00:04:52.612 "name": "static" 00:04:52.612 } 00:04:52.612 } 00:04:52.612 ] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "vhost_scsi", 00:04:52.612 "config": [] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "vhost_blk", 00:04:52.612 "config": [] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "ublk", 00:04:52.612 "config": [] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "nbd", 00:04:52.612 "config": [] 00:04:52.612 }, 00:04:52.612 { 00:04:52.612 "subsystem": "nvmf", 00:04:52.612 "config": [ 00:04:52.612 { 00:04:52.612 "method": "nvmf_set_config", 00:04:52.612 "params": { 00:04:52.613 "discovery_filter": "match_any", 00:04:52.613 "admin_cmd_passthru": { 00:04:52.613 "identify_ctrlr": false 00:04:52.613 } 00:04:52.613 } 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "method": "nvmf_set_max_subsystems", 00:04:52.613 "params": { 00:04:52.613 "max_subsystems": 1024 00:04:52.613 } 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "method": "nvmf_set_crdt", 00:04:52.613 "params": { 00:04:52.613 "crdt1": 0, 00:04:52.613 "crdt2": 0, 00:04:52.613 "crdt3": 0 00:04:52.613 } 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "method": "nvmf_create_transport", 00:04:52.613 "params": { 00:04:52.613 "trtype": "TCP", 00:04:52.613 "max_queue_depth": 128, 00:04:52.613 "max_io_qpairs_per_ctrlr": 127, 00:04:52.613 "in_capsule_data_size": 4096, 00:04:52.613 "max_io_size": 131072, 00:04:52.613 "io_unit_size": 131072, 00:04:52.613 "max_aq_depth": 128, 00:04:52.613 "num_shared_buffers": 511, 00:04:52.613 "buf_cache_size": 4294967295, 00:04:52.613 "dif_insert_or_strip": false, 00:04:52.613 "zcopy": false, 00:04:52.613 "c2h_success": true, 00:04:52.613 "sock_priority": 0, 00:04:52.613 "abort_timeout_sec": 1, 00:04:52.613 "ack_timeout": 0, 00:04:52.613 "data_wr_pool_size": 0 00:04:52.613 } 00:04:52.613 } 00:04:52.613 ] 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "subsystem": "iscsi", 00:04:52.613 "config": [ 00:04:52.613 { 00:04:52.613 "method": "iscsi_set_options", 00:04:52.613 "params": { 00:04:52.613 "node_base": "iqn.2016-06.io.spdk", 00:04:52.613 "max_sessions": 128, 00:04:52.613 "max_connections_per_session": 2, 00:04:52.613 "max_queue_depth": 64, 00:04:52.613 "default_time2wait": 2, 00:04:52.613 "default_time2retain": 20, 00:04:52.613 "first_burst_length": 8192, 00:04:52.613 "immediate_data": true, 00:04:52.613 "allow_duplicated_isid": false, 00:04:52.613 
"error_recovery_level": 0, 00:04:52.613 "nop_timeout": 60, 00:04:52.613 "nop_in_interval": 30, 00:04:52.613 "disable_chap": false, 00:04:52.613 "require_chap": false, 00:04:52.613 "mutual_chap": false, 00:04:52.613 "chap_group": 0, 00:04:52.613 "max_large_datain_per_connection": 64, 00:04:52.613 "max_r2t_per_connection": 4, 00:04:52.613 "pdu_pool_size": 36864, 00:04:52.613 "immediate_data_pool_size": 16384, 00:04:52.613 "data_out_pool_size": 2048 00:04:52.613 } 00:04:52.613 } 00:04:52.613 ] 00:04:52.613 } 00:04:52.613 ] 00:04:52.613 } 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2902290 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2902290 ']' 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2902290 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902290 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902290' 00:04:52.613 killing process with pid 2902290 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2902290 00:04:52.613 11:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2902290 00:04:53.178 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2902430 00:04:53.178 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.178 11:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2902430 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2902430 ']' 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2902430 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902430 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902430' 00:04:58.437 killing process with pid 2902430 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2902430 00:04:58.437 11:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2902430 
00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.697 00:04:58.697 real 0m6.543s 00:04:58.697 user 0m6.169s 00:04:58.697 sys 0m0.662s 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 ************************************ 00:04:58.697 END TEST skip_rpc_with_json 00:04:58.697 ************************************ 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.697 11:31:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 ************************************ 00:04:58.697 START TEST skip_rpc_with_delay 00:04:58.697 ************************************ 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.697 [2024-07-15 11:31:06.544599] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
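skip_rpc_with_json, which ends above, is a save/restore round trip: create a TCP transport over RPC, dump the running config, restart the target from that JSON with the RPC server disabled, and confirm the transport was re-created. A sketch under the paths defined earlier in the trace (how the second target's output is captured into log.txt is assumed; the trace only shows the grep):

    CONFIG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
    LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
    ./build/bin/spdk_tgt -m 0x1 & tgt=$!; sleep 5
    ./scripts/rpc.py nvmf_create_transport -t tcp   # tcp.c logs '*** TCP Transport Init ***'
    ./scripts/rpc.py save_config > "$CONFIG"        # the JSON dump shown above
    kill $tgt; wait $tgt
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 & tgt=$!
    sleep 5
    kill $tgt; wait $tgt
    grep -q 'TCP Transport Init' "$LOG"             # the reloaded config re-created the transport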
00:04:58.697 [2024-07-15 11:31:06.544705] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.697 00:04:58.697 real 0m0.070s 00:04:58.697 user 0m0.044s 00:04:58.697 sys 0m0.025s 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.697 11:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 ************************************ 00:04:58.697 END TEST skip_rpc_with_delay 00:04:58.697 ************************************ 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.697 11:31:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.697 11:31:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.697 11:31:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.697 11:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 ************************************ 00:04:58.697 START TEST exit_on_failed_rpc_init 00:04:58.697 ************************************ 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2903143 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2903143 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2903143 ']' 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.697 11:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.697 [2024-07-15 11:31:06.663795] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
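skip_rpc_with_delay is another negative test: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to make the target exit with an error (the app.c message above). Roughly:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started with no RPC server to wait for" >&2
        exit 1
    fi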
00:04:58.697 [2024-07-15 11:31:06.663895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903143 ] 00:04:58.955 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.955 [2024-07-15 11:31:06.721011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.955 [2024-07-15 11:31:06.820787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:59.213 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.213 [2024-07-15 11:31:07.117302] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
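The "Waiting for process to start up and listen on UNIX domain socket" message echoed above comes from waitforlisten, which blocks until the target's RPC socket answers. A rough stand-in for that polling, assuming the rpc.py from the checked-out tree and using spdk_get_version as a harmless probe (the harness's own helper may check readiness differently):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break                        # target is up and serving RPCs
        fi
        sleep 0.1
    done
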
00:04:59.213 [2024-07-15 11:31:07.117394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903158 ] 00:04:59.213 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.213 [2024-07-15 11:31:07.173409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.471 [2024-07-15 11:31:07.285709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.471 [2024-07-15 11:31:07.285849] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:59.471 [2024-07-15 11:31:07.285871] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.471 [2024-07-15 11:31:07.285884] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2903143 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2903143 ']' 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2903143 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2903143 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2903143' 00:04:59.471 killing process with pid 2903143 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2903143 00:04:59.471 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2903143 00:05:00.037 00:05:00.037 real 0m1.278s 00:05:00.037 user 0m1.442s 00:05:00.037 sys 0m0.430s 00:05:00.037 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.037 11:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.037 ************************************ 00:05:00.037 END TEST exit_on_failed_rpc_init 00:05:00.037 ************************************ 00:05:00.037 11:31:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.037 11:31:07 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.037 00:05:00.037 real 0m13.604s 00:05:00.037 user 0m12.907s 00:05:00.037 sys 0m1.600s 00:05:00.037 11:31:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.037 11:31:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.037 ************************************ 00:05:00.037 END TEST skip_rpc 00:05:00.037 ************************************ 00:05:00.037 11:31:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.037 11:31:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.037 11:31:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.037 11:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.037 11:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.037 ************************************ 00:05:00.037 START TEST rpc_client 00:05:00.037 ************************************ 00:05:00.037 11:31:07 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.037 * Looking for test storage... 00:05:00.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:00.037 11:31:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:00.296 OK 00:05:00.296 11:31:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.296 00:05:00.296 real 0m0.072s 00:05:00.296 user 0m0.026s 00:05:00.296 sys 0m0.050s 00:05:00.296 11:31:08 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.296 11:31:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.296 ************************************ 00:05:00.296 END TEST rpc_client 00:05:00.296 ************************************ 00:05:00.296 11:31:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.296 11:31:08 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:00.296 11:31:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.296 11:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.296 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:00.296 ************************************ 00:05:00.296 START TEST json_config 00:05:00.296 ************************************ 00:05:00.296 11:31:08 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.296 
11:31:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:00.296 11:31:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.296 11:31:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.296 11:31:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.296 11:31:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.296 11:31:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.296 11:31:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.296 11:31:08 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.296 11:31:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@47 -- # : 0 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.296 11:31:08 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:00.296 11:31:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:00.296 11:31:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:00.297 INFO: JSON configuration test init 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.297 11:31:08 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:00.297 11:31:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:00.297 11:31:08 json_config -- json_config/common.sh@10 -- # shift 00:05:00.297 11:31:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.297 11:31:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.297 11:31:08 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.297 11:31:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.297 11:31:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.297 11:31:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2903401 00:05:00.297 11:31:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:00.297 11:31:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.297 Waiting for target to run... 00:05:00.297 11:31:08 json_config -- json_config/common.sh@25 -- # waitforlisten 2903401 /var/tmp/spdk_tgt.sock 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 2903401 ']' 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.297 11:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.297 [2024-07-15 11:31:08.176863] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:00.297 [2024-07-15 11:31:08.176950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903401 ] 00:05:00.297 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.865 [2024-07-15 11:31:08.702054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.865 [2024-07-15 11:31:08.794357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.431 11:31:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.431 11:31:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:01.431 11:31:09 json_config -- json_config/common.sh@26 -- # echo '' 00:05:01.431 00:05:01.431 11:31:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:01.431 11:31:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:01.431 11:31:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.431 11:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.431 11:31:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:01.431 11:31:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:01.431 11:31:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.431 11:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.431 11:31:09 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:01.431 11:31:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:01.431 11:31:09 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.711 11:31:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.711 11:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:04.711 11:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:04.711 11:31:12 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:04.712 11:31:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.712 11:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:04.712 11:31:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.712 11:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:04.712 11:31:12 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.712 11:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.034 MallocForNvmf0 00:05:05.034 11:31:12 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.034 11:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.340 MallocForNvmf1 00:05:05.340 11:31:13 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.340 11:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.597 [2024-07-15 11:31:13.312572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.597 11:31:13 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.597 11:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:05.597 11:31:13 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.597 11:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:05.855 11:31:13 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.855 11:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.112 11:31:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.113 11:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.370 [2024-07-15 11:31:14.283637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.370 11:31:14 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:06.370 11:31:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.370 11:31:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.370 11:31:14 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:06.370 11:31:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.370 11:31:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.370 11:31:14 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:06.370 11:31:14 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.370 11:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.628 MallocBdevForConfigChangeCheck 00:05:06.628 11:31:14 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:06.628 11:31:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.628 11:31:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.628 11:31:14 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:06.628 11:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.206 11:31:14 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:07.206 INFO: shutting down applications... 00:05:07.206 11:31:14 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:07.206 11:31:14 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:07.206 11:31:14 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:07.206 11:31:14 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.102 Calling clear_iscsi_subsystem 00:05:09.102 Calling clear_nvmf_subsystem 00:05:09.102 Calling clear_nbd_subsystem 00:05:09.102 Calling clear_ublk_subsystem 00:05:09.102 Calling clear_vhost_blk_subsystem 00:05:09.102 Calling clear_vhost_scsi_subsystem 00:05:09.102 Calling clear_bdev_subsystem 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@345 -- # break 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:09.102 11:31:16 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:09.102 11:31:16 json_config -- json_config/common.sh@31 -- # local app=target 00:05:09.102 11:31:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.102 11:31:16 json_config -- json_config/common.sh@35 -- # [[ -n 2903401 ]] 00:05:09.102 11:31:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2903401 00:05:09.102 11:31:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.102 11:31:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.102 11:31:16 json_config -- json_config/common.sh@41 -- # kill -0 2903401 00:05:09.102 11:31:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.696 11:31:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.696 11:31:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.696 11:31:17 json_config -- json_config/common.sh@41 -- # kill -0 2903401 00:05:09.696 11:31:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.696 11:31:17 json_config -- json_config/common.sh@43 -- # break 00:05:09.696 11:31:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.696 11:31:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:09.696 SPDK target shutdown done 00:05:09.696 11:31:17 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:09.696 INFO: relaunching applications... 00:05:09.696 11:31:17 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.696 11:31:17 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.696 11:31:17 json_config -- json_config/common.sh@10 -- # shift 00:05:09.696 11:31:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.696 11:31:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.696 11:31:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.696 11:31:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.696 11:31:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.696 11:31:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2904712 00:05:09.696 11:31:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.696 11:31:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.696 Waiting for target to run... 00:05:09.696 11:31:17 json_config -- json_config/common.sh@25 -- # waitforlisten 2904712 /var/tmp/spdk_tgt.sock 00:05:09.696 11:31:17 json_config -- common/autotest_common.sh@829 -- # '[' -z 2904712 ']' 00:05:09.696 11:31:17 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.696 11:31:17 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.696 11:31:17 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.696 11:31:17 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.696 11:31:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.696 [2024-07-15 11:31:17.552753] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
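Before the shutdown and relaunch above, this json_config run assembled its NVMe-oF state over the RPC socket and saved it to spdk_tgt_config.json, the file the new target now loads with --json. Collected from the trace, the equivalent standalone commands are roughly (rpc.py path shortened to scripts/rpc.py; socket and arguments as logged):

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc save_config > spdk_tgt_config.json   # persisted config used for the relaunch below
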
00:05:09.696 [2024-07-15 11:31:17.552865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904712 ] 00:05:09.696 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.263 [2024-07-15 11:31:18.101993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.263 [2024-07-15 11:31:18.194933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.541 [2024-07-15 11:31:21.237907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.541 [2024-07-15 11:31:21.270327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.105 11:31:21 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.105 11:31:21 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:14.105 11:31:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.105 00:05:14.105 11:31:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:14.105 11:31:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.105 INFO: Checking if target configuration is the same... 00:05:14.105 11:31:21 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.105 11:31:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:14.105 11:31:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.105 + '[' 2 -ne 2 ']' 00:05:14.105 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.105 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.105 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.105 +++ basename /dev/fd/62 00:05:14.105 ++ mktemp /tmp/62.XXX 00:05:14.105 + tmp_file_1=/tmp/62.owh 00:05:14.106 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.106 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.106 + tmp_file_2=/tmp/spdk_tgt_config.json.lE4 00:05:14.106 + ret=0 00:05:14.106 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.362 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.619 + diff -u /tmp/62.owh /tmp/spdk_tgt_config.json.lE4 00:05:14.619 + echo 'INFO: JSON config files are the same' 00:05:14.619 INFO: JSON config files are the same 00:05:14.619 + rm /tmp/62.owh /tmp/spdk_tgt_config.json.lE4 00:05:14.619 + exit 0 00:05:14.619 11:31:22 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:14.619 11:31:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.619 INFO: changing configuration and checking if this can be detected... 
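Once the relaunched target is listening again, the test exports the live configuration a second time and diffs it, after sorting, against the saved file. A condensed sketch of that comparison, assuming config_filter.py reads JSON on stdin as the trace suggests (temporary file names are illustrative):

    filter=test/json_config/config_filter.py
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live_sorted.json
    "$filter" -method sort < spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'
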
00:05:14.619 11:31:22 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.619 11:31:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.877 11:31:22 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.877 11:31:22 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:14.877 11:31:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.877 + '[' 2 -ne 2 ']' 00:05:14.877 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.877 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.877 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.877 +++ basename /dev/fd/62 00:05:14.877 ++ mktemp /tmp/62.XXX 00:05:14.877 + tmp_file_1=/tmp/62.FAJ 00:05:14.877 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.877 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.877 + tmp_file_2=/tmp/spdk_tgt_config.json.7Lh 00:05:14.877 + ret=0 00:05:14.877 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.134 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.134 + diff -u /tmp/62.FAJ /tmp/spdk_tgt_config.json.7Lh 00:05:15.134 + ret=1 00:05:15.134 + echo '=== Start of file: /tmp/62.FAJ ===' 00:05:15.134 + cat /tmp/62.FAJ 00:05:15.134 + echo '=== End of file: /tmp/62.FAJ ===' 00:05:15.134 + echo '' 00:05:15.134 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7Lh ===' 00:05:15.134 + cat /tmp/spdk_tgt_config.json.7Lh 00:05:15.134 + echo '=== End of file: /tmp/spdk_tgt_config.json.7Lh ===' 00:05:15.134 + echo '' 00:05:15.134 + rm /tmp/62.FAJ /tmp/spdk_tgt_config.json.7Lh 00:05:15.134 + exit 1 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:15.134 INFO: configuration change detected. 
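The detection half of the test then removes the sentinel bdev and repeats the same comparison, this time expecting diff to report a difference; continuing the sketch above:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json || echo 'INFO: configuration change detected.'
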
00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@317 -- # [[ -n 2904712 ]] 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.134 11:31:23 json_config -- json_config/json_config.sh@323 -- # killprocess 2904712 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@948 -- # '[' -z 2904712 ']' 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@952 -- # kill -0 2904712 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@953 -- # uname 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.134 11:31:23 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2904712 00:05:15.394 11:31:23 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.394 11:31:23 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.394 11:31:23 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2904712' 00:05:15.394 killing process with pid 2904712 00:05:15.394 11:31:23 json_config -- common/autotest_common.sh@967 -- # kill 2904712 00:05:15.394 11:31:23 json_config -- common/autotest_common.sh@972 -- # wait 2904712 00:05:17.293 11:31:24 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.293 11:31:24 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:17.293 11:31:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.293 11:31:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.293 11:31:24 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:17.293 11:31:24 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:17.293 INFO: Success 00:05:17.293 00:05:17.293 real 0m16.738s 
00:05:17.293 user 0m18.468s 00:05:17.293 sys 0m2.277s 00:05:17.293 11:31:24 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.293 11:31:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.293 ************************************ 00:05:17.293 END TEST json_config 00:05:17.293 ************************************ 00:05:17.293 11:31:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.293 11:31:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.293 11:31:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.293 11:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.293 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.293 ************************************ 00:05:17.293 START TEST json_config_extra_key 00:05:17.293 ************************************ 00:05:17.293 11:31:24 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.293 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.293 11:31:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.293 11:31:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.293 11:31:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.293 11:31:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.294 11:31:24 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.294 11:31:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.294 11:31:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.294 11:31:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:17.294 11:31:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.294 11:31:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:17.294 11:31:24 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:17.294 INFO: launching applications... 00:05:17.294 11:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2905637 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.294 Waiting for target to run... 00:05:17.294 11:31:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2905637 /var/tmp/spdk_tgt.sock 00:05:17.294 11:31:24 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2905637 ']' 00:05:17.294 11:31:24 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.294 11:31:24 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.294 11:31:24 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.294 11:31:24 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.294 11:31:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.294 [2024-07-15 11:31:24.972101] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
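The extra_key target launched here is torn down the same way as the earlier ones: the harness sends SIGINT and then polls the pid with kill -0 for up to 30 half-second intervals (roughly 15 seconds) before giving up. A hedged sketch of that loop, with app_pid standing in for whatever pid waitforlisten recorded:

    kill -SIGINT "$app_pid"
    for i in $(seq 1 30); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'   # process has exited cleanly
            break
        fi
        sleep 0.5
    done
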
00:05:17.294 [2024-07-15 11:31:24.972200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905637 ] 00:05:17.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.552 [2024-07-15 11:31:25.501577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.810 [2024-07-15 11:31:25.595659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.069 11:31:25 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.069 11:31:25 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:18.069 00:05:18.069 11:31:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:18.069 INFO: shutting down applications... 00:05:18.069 11:31:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2905637 ]] 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2905637 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2905637 00:05:18.069 11:31:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2905637 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.635 11:31:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.635 SPDK target shutdown done 00:05:18.635 11:31:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:18.635 Success 00:05:18.635 00:05:18.635 real 0m1.564s 00:05:18.635 user 0m1.373s 00:05:18.635 sys 0m0.637s 00:05:18.635 11:31:26 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.635 11:31:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.635 ************************************ 00:05:18.635 END TEST json_config_extra_key 00:05:18.635 ************************************ 00:05:18.635 11:31:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.635 11:31:26 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.635 11:31:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.635 11:31:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.635 11:31:26 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.635 ************************************ 00:05:18.635 START TEST alias_rpc 00:05:18.635 ************************************ 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.635 * Looking for test storage... 00:05:18.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:18.635 11:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:18.635 11:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2905942 00:05:18.635 11:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.635 11:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2905942 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2905942 ']' 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.635 11:31:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.635 [2024-07-15 11:31:26.587176] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:18.635 [2024-07-15 11:31:26.587271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905942 ] 00:05:18.635 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.894 [2024-07-15 11:31:26.645949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.894 [2024-07-15 11:31:26.751496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.152 11:31:26 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.152 11:31:27 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.152 11:31:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:19.410 11:31:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2905942 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2905942 ']' 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2905942 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2905942 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2905942' 00:05:19.410 killing process with pid 2905942 00:05:19.410 11:31:27 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2905942 00:05:19.410 11:31:27 alias_rpc -- common/autotest_common.sh@972 -- # wait 2905942 00:05:19.977 00:05:19.977 real 0m1.245s 00:05:19.977 user 0m1.310s 00:05:19.977 sys 0m0.422s 00:05:19.977 11:31:27 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.977 11:31:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.977 ************************************ 00:05:19.977 END TEST alias_rpc 00:05:19.977 ************************************ 00:05:19.977 11:31:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.977 11:31:27 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:19.977 11:31:27 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:19.977 11:31:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.977 11:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.977 11:31:27 -- common/autotest_common.sh@10 -- # set +x 00:05:19.977 ************************************ 00:05:19.977 START TEST spdkcli_tcp 00:05:19.977 ************************************ 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:19.977 * Looking for test storage... 00:05:19.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2906127 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:19.977 11:31:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2906127 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2906127 ']' 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.977 11:31:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.977 [2024-07-15 11:31:27.890461] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
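The spdk_tgt launch above is gated by the waitforlisten helper from common/autotest_common.sh: the test blocks until the target's RPC socket answers (or max_retries, 100 here, runs out) before issuing any RPCs. A minimal sketch of that polling loop follows; the use of rpc_get_methods as the liveness probe and the 0.5 s sleep are illustrative assumptions, not a copy of the real helper:

    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during init
            # consider the target ready once any RPC succeeds over the Unix socket
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }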
00:05:19.977 [2024-07-15 11:31:27.890547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906127 ] 00:05:19.977 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.977 [2024-07-15 11:31:27.947960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.235 [2024-07-15 11:31:28.056369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.235 [2024-07-15 11:31:28.056375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.494 11:31:28 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.494 11:31:28 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:20.494 11:31:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2906141 00:05:20.494 11:31:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:20.494 11:31:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:20.753 [ 00:05:20.753 "bdev_malloc_delete", 00:05:20.753 "bdev_malloc_create", 00:05:20.753 "bdev_null_resize", 00:05:20.753 "bdev_null_delete", 00:05:20.753 "bdev_null_create", 00:05:20.753 "bdev_nvme_cuse_unregister", 00:05:20.753 "bdev_nvme_cuse_register", 00:05:20.753 "bdev_opal_new_user", 00:05:20.753 "bdev_opal_set_lock_state", 00:05:20.753 "bdev_opal_delete", 00:05:20.753 "bdev_opal_get_info", 00:05:20.753 "bdev_opal_create", 00:05:20.753 "bdev_nvme_opal_revert", 00:05:20.753 "bdev_nvme_opal_init", 00:05:20.753 "bdev_nvme_send_cmd", 00:05:20.753 "bdev_nvme_get_path_iostat", 00:05:20.753 "bdev_nvme_get_mdns_discovery_info", 00:05:20.753 "bdev_nvme_stop_mdns_discovery", 00:05:20.753 "bdev_nvme_start_mdns_discovery", 00:05:20.753 "bdev_nvme_set_multipath_policy", 00:05:20.753 "bdev_nvme_set_preferred_path", 00:05:20.753 "bdev_nvme_get_io_paths", 00:05:20.753 "bdev_nvme_remove_error_injection", 00:05:20.753 "bdev_nvme_add_error_injection", 00:05:20.753 "bdev_nvme_get_discovery_info", 00:05:20.753 "bdev_nvme_stop_discovery", 00:05:20.753 "bdev_nvme_start_discovery", 00:05:20.753 "bdev_nvme_get_controller_health_info", 00:05:20.753 "bdev_nvme_disable_controller", 00:05:20.753 "bdev_nvme_enable_controller", 00:05:20.753 "bdev_nvme_reset_controller", 00:05:20.753 "bdev_nvme_get_transport_statistics", 00:05:20.753 "bdev_nvme_apply_firmware", 00:05:20.753 "bdev_nvme_detach_controller", 00:05:20.753 "bdev_nvme_get_controllers", 00:05:20.753 "bdev_nvme_attach_controller", 00:05:20.753 "bdev_nvme_set_hotplug", 00:05:20.753 "bdev_nvme_set_options", 00:05:20.753 "bdev_passthru_delete", 00:05:20.753 "bdev_passthru_create", 00:05:20.753 "bdev_lvol_set_parent_bdev", 00:05:20.753 "bdev_lvol_set_parent", 00:05:20.753 "bdev_lvol_check_shallow_copy", 00:05:20.753 "bdev_lvol_start_shallow_copy", 00:05:20.753 "bdev_lvol_grow_lvstore", 00:05:20.753 "bdev_lvol_get_lvols", 00:05:20.753 "bdev_lvol_get_lvstores", 00:05:20.753 "bdev_lvol_delete", 00:05:20.753 "bdev_lvol_set_read_only", 00:05:20.753 "bdev_lvol_resize", 00:05:20.753 "bdev_lvol_decouple_parent", 00:05:20.753 "bdev_lvol_inflate", 00:05:20.753 "bdev_lvol_rename", 00:05:20.753 "bdev_lvol_clone_bdev", 00:05:20.753 "bdev_lvol_clone", 00:05:20.753 "bdev_lvol_snapshot", 00:05:20.753 "bdev_lvol_create", 00:05:20.753 "bdev_lvol_delete_lvstore", 00:05:20.753 
"bdev_lvol_rename_lvstore", 00:05:20.753 "bdev_lvol_create_lvstore", 00:05:20.753 "bdev_raid_set_options", 00:05:20.753 "bdev_raid_remove_base_bdev", 00:05:20.753 "bdev_raid_add_base_bdev", 00:05:20.753 "bdev_raid_delete", 00:05:20.753 "bdev_raid_create", 00:05:20.753 "bdev_raid_get_bdevs", 00:05:20.753 "bdev_error_inject_error", 00:05:20.753 "bdev_error_delete", 00:05:20.753 "bdev_error_create", 00:05:20.753 "bdev_split_delete", 00:05:20.753 "bdev_split_create", 00:05:20.753 "bdev_delay_delete", 00:05:20.753 "bdev_delay_create", 00:05:20.753 "bdev_delay_update_latency", 00:05:20.753 "bdev_zone_block_delete", 00:05:20.753 "bdev_zone_block_create", 00:05:20.753 "blobfs_create", 00:05:20.753 "blobfs_detect", 00:05:20.753 "blobfs_set_cache_size", 00:05:20.753 "bdev_aio_delete", 00:05:20.753 "bdev_aio_rescan", 00:05:20.753 "bdev_aio_create", 00:05:20.753 "bdev_ftl_set_property", 00:05:20.753 "bdev_ftl_get_properties", 00:05:20.753 "bdev_ftl_get_stats", 00:05:20.753 "bdev_ftl_unmap", 00:05:20.753 "bdev_ftl_unload", 00:05:20.753 "bdev_ftl_delete", 00:05:20.753 "bdev_ftl_load", 00:05:20.753 "bdev_ftl_create", 00:05:20.753 "bdev_virtio_attach_controller", 00:05:20.753 "bdev_virtio_scsi_get_devices", 00:05:20.753 "bdev_virtio_detach_controller", 00:05:20.753 "bdev_virtio_blk_set_hotplug", 00:05:20.753 "bdev_iscsi_delete", 00:05:20.753 "bdev_iscsi_create", 00:05:20.753 "bdev_iscsi_set_options", 00:05:20.753 "accel_error_inject_error", 00:05:20.753 "ioat_scan_accel_module", 00:05:20.753 "dsa_scan_accel_module", 00:05:20.753 "iaa_scan_accel_module", 00:05:20.753 "vfu_virtio_create_scsi_endpoint", 00:05:20.753 "vfu_virtio_scsi_remove_target", 00:05:20.753 "vfu_virtio_scsi_add_target", 00:05:20.753 "vfu_virtio_create_blk_endpoint", 00:05:20.753 "vfu_virtio_delete_endpoint", 00:05:20.753 "keyring_file_remove_key", 00:05:20.753 "keyring_file_add_key", 00:05:20.753 "keyring_linux_set_options", 00:05:20.753 "iscsi_get_histogram", 00:05:20.753 "iscsi_enable_histogram", 00:05:20.753 "iscsi_set_options", 00:05:20.753 "iscsi_get_auth_groups", 00:05:20.753 "iscsi_auth_group_remove_secret", 00:05:20.753 "iscsi_auth_group_add_secret", 00:05:20.753 "iscsi_delete_auth_group", 00:05:20.753 "iscsi_create_auth_group", 00:05:20.753 "iscsi_set_discovery_auth", 00:05:20.753 "iscsi_get_options", 00:05:20.753 "iscsi_target_node_request_logout", 00:05:20.753 "iscsi_target_node_set_redirect", 00:05:20.753 "iscsi_target_node_set_auth", 00:05:20.753 "iscsi_target_node_add_lun", 00:05:20.753 "iscsi_get_stats", 00:05:20.753 "iscsi_get_connections", 00:05:20.753 "iscsi_portal_group_set_auth", 00:05:20.753 "iscsi_start_portal_group", 00:05:20.753 "iscsi_delete_portal_group", 00:05:20.753 "iscsi_create_portal_group", 00:05:20.753 "iscsi_get_portal_groups", 00:05:20.753 "iscsi_delete_target_node", 00:05:20.753 "iscsi_target_node_remove_pg_ig_maps", 00:05:20.753 "iscsi_target_node_add_pg_ig_maps", 00:05:20.753 "iscsi_create_target_node", 00:05:20.753 "iscsi_get_target_nodes", 00:05:20.753 "iscsi_delete_initiator_group", 00:05:20.753 "iscsi_initiator_group_remove_initiators", 00:05:20.753 "iscsi_initiator_group_add_initiators", 00:05:20.753 "iscsi_create_initiator_group", 00:05:20.753 "iscsi_get_initiator_groups", 00:05:20.753 "nvmf_set_crdt", 00:05:20.753 "nvmf_set_config", 00:05:20.753 "nvmf_set_max_subsystems", 00:05:20.753 "nvmf_stop_mdns_prr", 00:05:20.753 "nvmf_publish_mdns_prr", 00:05:20.753 "nvmf_subsystem_get_listeners", 00:05:20.753 "nvmf_subsystem_get_qpairs", 00:05:20.753 "nvmf_subsystem_get_controllers", 00:05:20.753 
"nvmf_get_stats", 00:05:20.753 "nvmf_get_transports", 00:05:20.753 "nvmf_create_transport", 00:05:20.753 "nvmf_get_targets", 00:05:20.753 "nvmf_delete_target", 00:05:20.753 "nvmf_create_target", 00:05:20.753 "nvmf_subsystem_allow_any_host", 00:05:20.753 "nvmf_subsystem_remove_host", 00:05:20.753 "nvmf_subsystem_add_host", 00:05:20.753 "nvmf_ns_remove_host", 00:05:20.753 "nvmf_ns_add_host", 00:05:20.753 "nvmf_subsystem_remove_ns", 00:05:20.753 "nvmf_subsystem_add_ns", 00:05:20.753 "nvmf_subsystem_listener_set_ana_state", 00:05:20.753 "nvmf_discovery_get_referrals", 00:05:20.753 "nvmf_discovery_remove_referral", 00:05:20.753 "nvmf_discovery_add_referral", 00:05:20.753 "nvmf_subsystem_remove_listener", 00:05:20.753 "nvmf_subsystem_add_listener", 00:05:20.753 "nvmf_delete_subsystem", 00:05:20.753 "nvmf_create_subsystem", 00:05:20.753 "nvmf_get_subsystems", 00:05:20.753 "env_dpdk_get_mem_stats", 00:05:20.753 "nbd_get_disks", 00:05:20.753 "nbd_stop_disk", 00:05:20.753 "nbd_start_disk", 00:05:20.753 "ublk_recover_disk", 00:05:20.753 "ublk_get_disks", 00:05:20.753 "ublk_stop_disk", 00:05:20.753 "ublk_start_disk", 00:05:20.753 "ublk_destroy_target", 00:05:20.753 "ublk_create_target", 00:05:20.753 "virtio_blk_create_transport", 00:05:20.753 "virtio_blk_get_transports", 00:05:20.753 "vhost_controller_set_coalescing", 00:05:20.754 "vhost_get_controllers", 00:05:20.754 "vhost_delete_controller", 00:05:20.754 "vhost_create_blk_controller", 00:05:20.754 "vhost_scsi_controller_remove_target", 00:05:20.754 "vhost_scsi_controller_add_target", 00:05:20.754 "vhost_start_scsi_controller", 00:05:20.754 "vhost_create_scsi_controller", 00:05:20.754 "thread_set_cpumask", 00:05:20.754 "framework_get_governor", 00:05:20.754 "framework_get_scheduler", 00:05:20.754 "framework_set_scheduler", 00:05:20.754 "framework_get_reactors", 00:05:20.754 "thread_get_io_channels", 00:05:20.754 "thread_get_pollers", 00:05:20.754 "thread_get_stats", 00:05:20.754 "framework_monitor_context_switch", 00:05:20.754 "spdk_kill_instance", 00:05:20.754 "log_enable_timestamps", 00:05:20.754 "log_get_flags", 00:05:20.754 "log_clear_flag", 00:05:20.754 "log_set_flag", 00:05:20.754 "log_get_level", 00:05:20.754 "log_set_level", 00:05:20.754 "log_get_print_level", 00:05:20.754 "log_set_print_level", 00:05:20.754 "framework_enable_cpumask_locks", 00:05:20.754 "framework_disable_cpumask_locks", 00:05:20.754 "framework_wait_init", 00:05:20.754 "framework_start_init", 00:05:20.754 "scsi_get_devices", 00:05:20.754 "bdev_get_histogram", 00:05:20.754 "bdev_enable_histogram", 00:05:20.754 "bdev_set_qos_limit", 00:05:20.754 "bdev_set_qd_sampling_period", 00:05:20.754 "bdev_get_bdevs", 00:05:20.754 "bdev_reset_iostat", 00:05:20.754 "bdev_get_iostat", 00:05:20.754 "bdev_examine", 00:05:20.754 "bdev_wait_for_examine", 00:05:20.754 "bdev_set_options", 00:05:20.754 "notify_get_notifications", 00:05:20.754 "notify_get_types", 00:05:20.754 "accel_get_stats", 00:05:20.754 "accel_set_options", 00:05:20.754 "accel_set_driver", 00:05:20.754 "accel_crypto_key_destroy", 00:05:20.754 "accel_crypto_keys_get", 00:05:20.754 "accel_crypto_key_create", 00:05:20.754 "accel_assign_opc", 00:05:20.754 "accel_get_module_info", 00:05:20.754 "accel_get_opc_assignments", 00:05:20.754 "vmd_rescan", 00:05:20.754 "vmd_remove_device", 00:05:20.754 "vmd_enable", 00:05:20.754 "sock_get_default_impl", 00:05:20.754 "sock_set_default_impl", 00:05:20.754 "sock_impl_set_options", 00:05:20.754 "sock_impl_get_options", 00:05:20.754 "iobuf_get_stats", 00:05:20.754 "iobuf_set_options", 
00:05:20.754 "keyring_get_keys", 00:05:20.754 "framework_get_pci_devices", 00:05:20.754 "framework_get_config", 00:05:20.754 "framework_get_subsystems", 00:05:20.754 "vfu_tgt_set_base_path", 00:05:20.754 "trace_get_info", 00:05:20.754 "trace_get_tpoint_group_mask", 00:05:20.754 "trace_disable_tpoint_group", 00:05:20.754 "trace_enable_tpoint_group", 00:05:20.754 "trace_clear_tpoint_mask", 00:05:20.754 "trace_set_tpoint_mask", 00:05:20.754 "spdk_get_version", 00:05:20.754 "rpc_get_methods" 00:05:20.754 ] 00:05:20.754 11:31:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 11:31:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:20.754 11:31:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2906127 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2906127 ']' 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2906127 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2906127 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2906127' 00:05:20.754 killing process with pid 2906127 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2906127 00:05:20.754 11:31:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2906127 00:05:21.320 00:05:21.320 real 0m1.269s 00:05:21.320 user 0m2.232s 00:05:21.320 sys 0m0.439s 00:05:21.320 11:31:29 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.320 11:31:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.320 ************************************ 00:05:21.320 END TEST spdkcli_tcp 00:05:21.320 ************************************ 00:05:21.320 11:31:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.320 11:31:29 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.320 11:31:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.320 11:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.320 11:31:29 -- common/autotest_common.sh@10 -- # set +x 00:05:21.320 ************************************ 00:05:21.320 START TEST dpdk_mem_utility 00:05:21.320 ************************************ 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.320 * Looking for test storage... 
00:05:21.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:21.320 11:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.320 11:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2906335 00:05:21.320 11:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.320 11:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2906335 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2906335 ']' 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.320 11:31:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.320 [2024-07-15 11:31:29.202330] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:21.320 [2024-07-15 11:31:29.202425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906335 ] 00:05:21.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.320 [2024-07-15 11:31:29.263357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.578 [2024-07-15 11:31:29.370551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.512 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.512 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:22.512 11:31:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.512 11:31:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.512 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.512 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.512 { 00:05:22.512 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.512 } 00:05:22.512 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.512 11:31:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:22.512 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:22.512 1 heaps totaling size 814.000000 MiB 00:05:22.512 size: 814.000000 MiB heap id: 0 00:05:22.512 end heaps---------- 00:05:22.512 8 mempools totaling size 598.116089 MiB 00:05:22.512 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:22.512 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:22.512 size: 84.521057 MiB name: bdev_io_2906335 00:05:22.512 size: 51.011292 MiB name: evtpool_2906335 00:05:22.512 
size: 50.003479 MiB name: msgpool_2906335 00:05:22.512 size: 21.763794 MiB name: PDU_Pool 00:05:22.512 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:22.512 size: 0.026123 MiB name: Session_Pool 00:05:22.512 end mempools------- 00:05:22.512 6 memzones totaling size 4.142822 MiB 00:05:22.512 size: 1.000366 MiB name: RG_ring_0_2906335 00:05:22.512 size: 1.000366 MiB name: RG_ring_1_2906335 00:05:22.512 size: 1.000366 MiB name: RG_ring_4_2906335 00:05:22.512 size: 1.000366 MiB name: RG_ring_5_2906335 00:05:22.512 size: 0.125366 MiB name: RG_ring_2_2906335 00:05:22.512 size: 0.015991 MiB name: RG_ring_3_2906335 00:05:22.512 end memzones------- 00:05:22.512 11:31:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:22.512 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:22.512 list of free elements. size: 12.519348 MiB 00:05:22.512 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:22.512 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:22.512 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:22.512 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:22.512 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:22.512 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:22.512 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:22.512 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:22.512 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:22.512 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:22.512 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:22.512 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:22.512 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:22.512 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:22.512 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:22.512 list of standard malloc elements. 
size: 199.218079 MiB 00:05:22.512 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:22.512 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:22.512 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:22.512 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:22.512 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:22.512 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:22.512 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:22.512 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:22.512 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:22.512 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:22.512 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:22.512 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:22.512 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:22.512 list of memzone associated elements. 
size: 602.262573 MiB 00:05:22.512 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:22.512 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:22.512 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:22.512 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:22.512 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:22.512 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2906335_0 00:05:22.512 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:22.512 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2906335_0 00:05:22.512 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:22.512 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2906335_0 00:05:22.513 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:22.513 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:22.513 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:22.513 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:22.513 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:22.513 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2906335 00:05:22.513 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:22.513 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2906335 00:05:22.513 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:22.513 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2906335 00:05:22.513 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:22.513 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:22.513 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:22.513 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:22.513 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:22.513 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:22.513 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:22.513 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:22.513 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:22.513 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2906335 00:05:22.513 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:22.513 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2906335 00:05:22.513 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:22.513 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2906335 00:05:22.513 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:22.513 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2906335 00:05:22.513 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:22.513 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2906335 00:05:22.513 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:22.513 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:22.513 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:22.513 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:22.513 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:22.513 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:22.513 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:22.513 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2906335 00:05:22.513 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:22.513 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:22.513 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:22.513 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:22.513 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:22.513 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2906335 00:05:22.513 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:22.513 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:22.513 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:22.513 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2906335 00:05:22.513 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:22.513 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2906335 00:05:22.513 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:22.513 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:22.513 11:31:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:22.513 11:31:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2906335 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2906335 ']' 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2906335 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2906335 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2906335' 00:05:22.513 killing process with pid 2906335 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2906335 00:05:22.513 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2906335 00:05:22.772 00:05:22.772 real 0m1.615s 00:05:22.772 user 0m1.766s 00:05:22.772 sys 0m0.429s 00:05:22.772 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.772 11:31:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.772 ************************************ 00:05:22.772 END TEST dpdk_mem_utility 00:05:22.772 ************************************ 00:05:22.772 11:31:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.772 11:31:30 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:22.772 11:31:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.772 11:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.772 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.772 ************************************ 00:05:22.772 START TEST event 00:05:22.772 ************************************ 00:05:22.772 11:31:30 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.030 * Looking for test storage... 
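The dpdk_mem_utility pass above is a two-step check: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the free-element and malloc-element detail for heap id 0. Roughly:

    # ask the running spdk_tgt to dump DPDK memory stats
    scripts/rpc.py env_dpdk_get_mem_stats        # => {"filename": "/tmp/spdk_mem_dump.txt"}

    # summary view: 1 heap of 814 MiB, 8 mempools, 6 memzones (as listed above)
    scripts/dpdk_mem_info.py

    # per-heap detail (free and malloc element lists) for heap id 0
    scripts/dpdk_mem_info.py -m 0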
00:05:23.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:23.030 11:31:30 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:23.030 11:31:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.030 11:31:30 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.030 11:31:30 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:23.030 11:31:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.030 11:31:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.030 ************************************ 00:05:23.030 START TEST event_perf 00:05:23.030 ************************************ 00:05:23.030 11:31:30 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.030 Running I/O for 1 seconds...[2024-07-15 11:31:30.849114] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:23.030 [2024-07-15 11:31:30.849188] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906645 ] 00:05:23.030 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.030 [2024-07-15 11:31:30.907898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.030 [2024-07-15 11:31:31.015311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.030 [2024-07-15 11:31:31.015376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.030 [2024-07-15 11:31:31.015597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.030 [2024-07-15 11:31:31.015607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.403 Running I/O for 1 seconds... 00:05:24.403 lcore 0: 224743 00:05:24.403 lcore 1: 224743 00:05:24.403 lcore 2: 224744 00:05:24.403 lcore 3: 224743 00:05:24.403 done. 00:05:24.403 00:05:24.403 real 0m1.290s 00:05:24.403 user 0m4.210s 00:05:24.403 sys 0m0.075s 00:05:24.403 11:31:32 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.403 11:31:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.403 ************************************ 00:05:24.403 END TEST event_perf 00:05:24.403 ************************************ 00:05:24.403 11:31:32 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.403 11:31:32 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.403 11:31:32 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:24.403 11:31:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.403 11:31:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.403 ************************************ 00:05:24.403 START TEST event_reactor 00:05:24.403 ************************************ 00:05:24.403 11:31:32 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.403 [2024-07-15 11:31:32.184734] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
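event_perf is the first of the plain event-framework benchmarks: it starts one reactor per core in the mask and reports how many events each lcore handled during the run window, which is where the four "lcore N" counts above come from. The invocation, as the harness runs it:

    # 4 reactors (core mask 0xF), 1-second measurement window
    test/event/event_perf/event_perf -m 0xF -t 1
    # prints one per-lcore event count, then "done."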
00:05:24.403 [2024-07-15 11:31:32.184824] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906812 ] 00:05:24.403 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.403 [2024-07-15 11:31:32.243017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.403 [2024-07-15 11:31:32.347140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.776 test_start 00:05:25.776 oneshot 00:05:25.776 tick 100 00:05:25.776 tick 100 00:05:25.776 tick 250 00:05:25.776 tick 100 00:05:25.776 tick 100 00:05:25.776 tick 100 00:05:25.776 tick 250 00:05:25.776 tick 500 00:05:25.776 tick 100 00:05:25.776 tick 100 00:05:25.776 tick 250 00:05:25.776 tick 100 00:05:25.776 tick 100 00:05:25.776 test_end 00:05:25.776 00:05:25.776 real 0m1.287s 00:05:25.776 user 0m1.215s 00:05:25.776 sys 0m0.068s 00:05:25.776 11:31:33 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.776 11:31:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:25.776 ************************************ 00:05:25.776 END TEST event_reactor 00:05:25.776 ************************************ 00:05:25.776 11:31:33 event -- common/autotest_common.sh@1142 -- # return 0 00:05:25.776 11:31:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.776 11:31:33 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:25.776 11:31:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.776 11:31:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.776 ************************************ 00:05:25.776 START TEST event_reactor_perf 00:05:25.776 ************************************ 00:05:25.776 11:31:33 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.776 [2024-07-15 11:31:33.525332] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:25.776 [2024-07-15 11:31:33.525400] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906965 ] 00:05:25.776 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.776 [2024-07-15 11:31:33.586268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.776 [2024-07-15 11:31:33.688625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.150 test_start 00:05:27.150 test_end 00:05:27.150 Performance: 448177 events per second 00:05:27.150 00:05:27.150 real 0m1.289s 00:05:27.150 user 0m1.197s 00:05:27.150 sys 0m0.087s 00:05:27.150 11:31:34 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.150 11:31:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.150 ************************************ 00:05:27.150 END TEST event_reactor_perf 00:05:27.150 ************************************ 00:05:27.150 11:31:34 event -- common/autotest_common.sh@1142 -- # return 0 00:05:27.150 11:31:34 event -- event/event.sh@49 -- # uname -s 00:05:27.150 11:31:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.150 11:31:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.150 11:31:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.150 11:31:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.150 11:31:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.150 ************************************ 00:05:27.150 START TEST event_scheduler 00:05:27.150 ************************************ 00:05:27.150 11:31:34 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.150 * Looking for test storage... 00:05:27.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:27.150 11:31:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.150 11:31:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2907145 00:05:27.150 11:31:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.151 11:31:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.151 11:31:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2907145 00:05:27.151 11:31:34 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2907145 ']' 00:05:27.151 11:31:34 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.151 11:31:34 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.151 11:31:34 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
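The scheduler suite starts its test app with --wait-for-rpc, so the framework pauses before initialization and the test can pick a scheduler over RPC first; the framework_set_scheduler and framework_start_init calls in the lines that follow do exactly that through the suite's rpc_cmd wrapper. The equivalent direct calls, with rpc.py standing in for rpc_cmd purely for illustration:

    # start paused; reactors on cores 0-3, main lcore 2 via -p
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten $scheduler_pid

    # pick the dynamic scheduler, then let framework init proceed
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init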
00:05:27.151 11:31:34 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.151 11:31:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.151 [2024-07-15 11:31:34.951257] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:27.151 [2024-07-15 11:31:34.951329] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907145 ] 00:05:27.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.151 [2024-07-15 11:31:35.014502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.151 [2024-07-15 11:31:35.126860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.151 [2024-07-15 11:31:35.126941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.151 [2024-07-15 11:31:35.126883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.151 [2024-07-15 11:31:35.126944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:27.410 11:31:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 [2024-07-15 11:31:35.171779] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:27.410 [2024-07-15 11:31:35.171806] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:27.410 [2024-07-15 11:31:35.171823] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:27.410 [2024-07-15 11:31:35.171835] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:27.410 [2024-07-15 11:31:35.171846] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 [2024-07-15 11:31:35.263296] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
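With the dynamic scheduler active (its defaults are visible above: load limit 20, core limit 80, core busy 95), the scheduler_create_thread subtest that follows exercises the app's RPC plugin: scheduler_thread_create spawns threads with a cpumask (-m) and an active percentage (-a), while scheduler_thread_set_active and scheduler_thread_delete act on the thread id that the create call returned. A condensed version of the calls the log makes, assuming the scheduler_plugin module that ships next to the test app is importable by rpc.py:

    # busy thread pinned to core 0, idle thread pinned to core 1
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # unpinned thread at 30% activity; then adjust and delete by thread id
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12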
00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 ************************************ 00:05:27.410 START TEST scheduler_create_thread 00:05:27.410 ************************************ 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 2 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 3 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 4 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 5 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 6 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 7 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 8 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.410 9 00:05:27.410 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.411 10 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.411 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.977 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.977 00:05:27.977 real 0m0.589s 00:05:27.977 user 0m0.009s 00:05:27.977 sys 0m0.003s 00:05:27.977 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.977 11:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.977 ************************************ 00:05:27.977 END TEST scheduler_create_thread 00:05:27.977 ************************************ 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:27.977 11:31:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:27.977 11:31:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2907145 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2907145 ']' 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2907145 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2907145 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2907145' 00:05:27.977 killing process with pid 2907145 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2907145 00:05:27.977 11:31:35 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2907145 00:05:28.542 [2024-07-15 11:31:36.359450] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
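Each suite's teardown goes through the killprocess helper whose checks are visible throughout this log: confirm a PID was passed and is still alive, look up the process name (reactor_0, reactor_2, ...) and refuse to signal a bare sudo wrapper, then kill and wait. A simplified sketch; the real helper in common/autotest_common.sh handles the sudo case rather than bailing out:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper special-cases a sudo wrapper; this sketch just refuses
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }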
00:05:28.801 00:05:28.801 real 0m1.762s 00:05:28.801 user 0m2.223s 00:05:28.801 sys 0m0.321s 00:05:28.801 11:31:36 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.801 11:31:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.801 ************************************ 00:05:28.801 END TEST event_scheduler 00:05:28.801 ************************************ 00:05:28.801 11:31:36 event -- common/autotest_common.sh@1142 -- # return 0 00:05:28.801 11:31:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.801 11:31:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.801 11:31:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.801 11:31:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.801 11:31:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.801 ************************************ 00:05:28.801 START TEST app_repeat 00:05:28.801 ************************************ 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2907456 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2907456' 00:05:28.801 Process app_repeat pid: 2907456 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.801 spdk_app_start Round 0 00:05:28.801 11:31:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2907456 /var/tmp/spdk-nbd.sock 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2907456 ']' 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.801 11:31:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.801 [2024-07-15 11:31:36.699288] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
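app_repeat runs its own app instance on a dedicated RPC socket (/var/tmp/spdk-nbd.sock), and for each of the 4 rounds (-t 4) the test creates two malloc bdevs and exports them as /dev/nbd0 and /dev/nbd1, as the nbd_start_disk calls that follow show. The per-round setup, condensed (the backgrounding shown here is illustrative):

    sock=/var/tmp/spdk-nbd.sock
    modprobe nbd

    # two reactors (core mask 0x3), 4 repeat rounds, RPCs on a dedicated socket
    test/event/app_repeat/app_repeat -r $sock -m 0x3 -t 4 &

    # 64 MiB malloc bdev with a 4 KiB block size, exported over NBD
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096     # -> Malloc0
    scripts/rpc.py -s $sock nbd_start_disk Malloc0 /dev/nbd0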
00:05:28.801 [2024-07-15 11:31:36.699356] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907456 ] 00:05:28.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.801 [2024-07-15 11:31:36.758286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.060 [2024-07-15 11:31:36.872509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.060 [2024-07-15 11:31:36.872513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.060 11:31:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.060 11:31:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.060 11:31:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.364 Malloc0 00:05:29.364 11:31:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.647 Malloc1 00:05:29.647 11:31:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.647 11:31:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.647 11:31:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.647 11:31:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.647 11:31:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.648 11:31:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.906 /dev/nbd0 00:05:29.906 11:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.906 11:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.906 11:31:37 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.906 11:31:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.906 1+0 records in 00:05:29.906 1+0 records out 00:05:29.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147571 s, 27.8 MB/s 00:05:29.907 11:31:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.907 11:31:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.907 11:31:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.907 11:31:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.907 11:31:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.907 11:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.907 11:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.907 11:31:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.165 /dev/nbd1 00:05:30.165 11:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.165 11:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.165 1+0 records in 00:05:30.165 1+0 records out 00:05:30.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195276 s, 21.0 MB/s 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.165 11:31:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.165 11:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.165 11:31:38 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.165 11:31:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.165 11:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.165 11:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.424 { 00:05:30.424 "nbd_device": "/dev/nbd0", 00:05:30.424 "bdev_name": "Malloc0" 00:05:30.424 }, 00:05:30.424 { 00:05:30.424 "nbd_device": "/dev/nbd1", 00:05:30.424 "bdev_name": "Malloc1" 00:05:30.424 } 00:05:30.424 ]' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.424 { 00:05:30.424 "nbd_device": "/dev/nbd0", 00:05:30.424 "bdev_name": "Malloc0" 00:05:30.424 }, 00:05:30.424 { 00:05:30.424 "nbd_device": "/dev/nbd1", 00:05:30.424 "bdev_name": "Malloc1" 00:05:30.424 } 00:05:30.424 ]' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.424 /dev/nbd1' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.424 /dev/nbd1' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.424 256+0 records in 00:05:30.424 256+0 records out 00:05:30.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496871 s, 211 MB/s 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.424 256+0 records in 00:05:30.424 256+0 records out 00:05:30.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215342 s, 48.7 MB/s 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.424 11:31:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.681 256+0 records in 00:05:30.681 256+0 records out 00:05:30.681 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0264528 s, 39.6 MB/s 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.681 11:31:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.682 11:31:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.940 11:31:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.197 11:31:38 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.197 11:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.455 11:31:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.455 11:31:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.713 11:31:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.972 [2024-07-15 11:31:39.796808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.972 [2024-07-15 11:31:39.900201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.972 [2024-07-15 11:31:39.900202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.972 [2024-07-15 11:31:39.958508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.972 [2024-07-15 11:31:39.958571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.254 11:31:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.254 11:31:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.254 spdk_app_start Round 1 00:05:35.254 11:31:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2907456 /var/tmp/spdk-nbd.sock 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2907456 ']' 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
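The Round 0 pass that just completed is one full nbd data-verify cycle; stripped of the xtrace plumbing (paths shortened, sizes taken from the dd/cmp calls above), the per-device sequence is approximately:

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # Malloc0, then Malloc1
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                # 1 MiB of random data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                                 # byte-for-byte readback check
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

with the same write/verify/stop repeated for Malloc1 on /dev/nbd1 before the temporary file is removed.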
00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.254 11:31:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.254 11:31:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.254 Malloc0 00:05:35.254 11:31:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.511 Malloc1 00:05:35.511 11:31:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.511 11:31:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.769 /dev/nbd0 00:05:35.769 11:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.769 11:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.769 1+0 records in 00:05:35.769 1+0 records out 00:05:35.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162139 s, 25.3 MB/s 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.769 11:31:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.769 11:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.769 11:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.769 11:31:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.026 /dev/nbd1 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.026 1+0 records in 00:05:36.026 1+0 records out 00:05:36.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224541 s, 18.2 MB/s 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.026 11:31:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.026 11:31:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:36.284 { 00:05:36.284 "nbd_device": "/dev/nbd0", 00:05:36.284 "bdev_name": "Malloc0" 00:05:36.284 }, 00:05:36.284 { 00:05:36.284 "nbd_device": "/dev/nbd1", 00:05:36.284 "bdev_name": "Malloc1" 00:05:36.284 } 00:05:36.284 ]' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.284 { 00:05:36.284 "nbd_device": "/dev/nbd0", 00:05:36.284 "bdev_name": "Malloc0" 00:05:36.284 }, 00:05:36.284 { 00:05:36.284 "nbd_device": "/dev/nbd1", 00:05:36.284 "bdev_name": "Malloc1" 00:05:36.284 } 00:05:36.284 ]' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.284 /dev/nbd1' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.284 /dev/nbd1' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.284 256+0 records in 00:05:36.284 256+0 records out 00:05:36.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050269 s, 209 MB/s 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.284 256+0 records in 00:05:36.284 256+0 records out 00:05:36.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023204 s, 45.2 MB/s 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.284 256+0 records in 00:05:36.284 256+0 records out 00:05:36.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244193 s, 42.9 MB/s 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.284 11:31:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.285 11:31:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.285 11:31:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.285 11:31:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.285 11:31:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.285 11:31:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.285 11:31:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.542 11:31:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.800 11:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.057 11:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.057 11:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.057 11:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.057 11:31:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.057 11:31:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.314 11:31:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.572 [2024-07-15 11:31:45.548260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.829 [2024-07-15 11:31:45.651261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.829 [2024-07-15 11:31:45.651266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.829 [2024-07-15 11:31:45.710004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.829 [2024-07-15 11:31:45.710106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.353 11:31:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.353 11:31:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.353 spdk_app_start Round 2 00:05:40.353 11:31:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2907456 /var/tmp/spdk-nbd.sock 00:05:40.353 11:31:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2907456 ']' 00:05:40.353 11:31:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.353 11:31:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.353 11:31:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
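Between rounds the target is torn down over RPC rather than signalled directly; the handoff seen above reduces to:

    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3

after which the next round re-creates the malloc bdevs and nbd attachments from scratch; the 'Notification type ... already registered' lines on restart are logged at NOTICE level and the run continues normally.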
00:05:40.353 11:31:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.353 11:31:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.610 11:31:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.610 11:31:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.610 11:31:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.888 Malloc0 00:05:40.888 11:31:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.146 Malloc1 00:05:41.146 11:31:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.146 11:31:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.146 11:31:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.146 11:31:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.146 11:31:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.146 11:31:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.146 11:31:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.147 11:31:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.405 /dev/nbd0 00:05:41.405 11:31:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.405 11:31:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:41.405 1+0 records in 00:05:41.405 1+0 records out 00:05:41.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194249 s, 21.1 MB/s 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.405 11:31:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:41.405 11:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.405 11:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.405 11:31:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.664 /dev/nbd1 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.664 1+0 records in 00:05:41.664 1+0 records out 00:05:41.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184392 s, 22.2 MB/s 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.664 11:31:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.664 11:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.923 11:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:41.923 { 00:05:41.923 "nbd_device": "/dev/nbd0", 00:05:41.923 "bdev_name": "Malloc0" 00:05:41.923 }, 00:05:41.923 { 00:05:41.923 "nbd_device": "/dev/nbd1", 00:05:41.923 "bdev_name": "Malloc1" 00:05:41.923 } 00:05:41.923 ]' 00:05:41.923 11:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.923 { 00:05:41.923 "nbd_device": "/dev/nbd0", 00:05:41.923 "bdev_name": "Malloc0" 00:05:41.923 }, 00:05:41.923 { 00:05:41.923 "nbd_device": "/dev/nbd1", 00:05:41.923 "bdev_name": "Malloc1" 00:05:41.923 } 00:05:41.923 ]' 00:05:41.923 11:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.182 /dev/nbd1' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.182 /dev/nbd1' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.182 256+0 records in 00:05:42.182 256+0 records out 00:05:42.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515599 s, 203 MB/s 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.182 256+0 records in 00:05:42.182 256+0 records out 00:05:42.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206369 s, 50.8 MB/s 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.182 256+0 records in 00:05:42.182 256+0 records out 00:05:42.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246884 s, 42.5 MB/s 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.182 11:31:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.182 11:31:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.440 11:31:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.440 11:31:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.440 11:31:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.441 11:31:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.698 11:31:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.956 11:31:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.956 11:31:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.214 11:31:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.472 [2024-07-15 11:31:51.366716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.730 [2024-07-15 11:31:51.469709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.730 [2024-07-15 11:31:51.469709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.730 [2024-07-15 11:31:51.528418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.730 [2024-07-15 11:31:51.528490] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.256 11:31:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2907456 /var/tmp/spdk-nbd.sock 00:05:46.256 11:31:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2907456 ']' 00:05:46.257 11:31:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.257 11:31:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.257 11:31:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
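Each teardown ends with a count check that the RPC socket reports no attached nbd devices; the check traced above amounts to:

    rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd

which prints 0 once both /dev/nbd0 and /dev/nbd1 have been stopped (the trace shows count=0 and the subsequent '0 -ne 0' test falling through).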
00:05:46.257 11:31:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.257 11:31:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:46.515 11:31:54 event.app_repeat -- event/event.sh@39 -- # killprocess 2907456 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2907456 ']' 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2907456 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2907456 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2907456' 00:05:46.515 killing process with pid 2907456 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2907456 00:05:46.515 11:31:54 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2907456 00:05:46.773 spdk_app_start is called in Round 0. 00:05:46.773 Shutdown signal received, stop current app iteration 00:05:46.773 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:05:46.773 spdk_app_start is called in Round 1. 00:05:46.773 Shutdown signal received, stop current app iteration 00:05:46.773 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:05:46.773 spdk_app_start is called in Round 2. 00:05:46.773 Shutdown signal received, stop current app iteration 00:05:46.773 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:05:46.773 spdk_app_start is called in Round 3. 
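The killprocess call above follows the usual autotest pattern: confirm the pid is still alive and is the expected SPDK reactor thread before killing it, then wait for it to exit. Minus the xtrace noise that is roughly:

    kill -0 "$pid"                      # still running?
    ps --no-headers -o comm= "$pid"     # reports reactor_0 here (reactor_2 for the scheduler app earlier)
    kill "$pid"
    wait "$pid"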
00:05:46.773 Shutdown signal received, stop current app iteration 00:05:46.773 11:31:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.773 11:31:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.773 00:05:46.773 real 0m17.951s 00:05:46.773 user 0m38.918s 00:05:46.773 sys 0m3.230s 00:05:46.773 11:31:54 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.773 11:31:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.773 ************************************ 00:05:46.773 END TEST app_repeat 00:05:46.773 ************************************ 00:05:46.773 11:31:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:46.773 11:31:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.773 11:31:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.773 11:31:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.773 11:31:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.773 11:31:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.773 ************************************ 00:05:46.773 START TEST cpu_locks 00:05:46.773 ************************************ 00:05:46.773 11:31:54 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.773 * Looking for test storage... 00:05:46.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.773 11:31:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.773 11:31:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.773 11:31:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.773 11:31:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.773 11:31:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.773 11:31:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.773 11:31:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.773 ************************************ 00:05:46.773 START TEST default_locks 00:05:46.773 ************************************ 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2909807 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2909807 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2909807 ']' 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
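default_locks starts a bare spdk_tgt on a single core and then, in the trace that follows, checks that the spdk_cpu_lock file lock is actually held by that pid:

    ./build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

The 'lslocks: write error' line below appears to be lslocks hitting the pipe closed by grep -q after its first match rather than a test failure; the test goes on to kill the target and finish normally.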
00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.773 11:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.032 [2024-07-15 11:31:54.808455] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:47.032 [2024-07-15 11:31:54.808521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909807 ] 00:05:47.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.032 [2024-07-15 11:31:54.864221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.032 [2024-07-15 11:31:54.967149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.289 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.289 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:47.289 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2909807 00:05:47.289 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2909807 00:05:47.289 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.547 lslocks: write error 00:05:47.547 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2909807 00:05:47.547 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2909807 ']' 00:05:47.547 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2909807 00:05:47.547 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:47.547 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.547 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2909807 00:05:47.805 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.805 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.805 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2909807' 00:05:47.805 killing process with pid 2909807 00:05:47.805 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2909807 00:05:47.805 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2909807 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2909807 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2909807 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2909807 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2909807 ']' 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2909807) - No such process 00:05:48.066 ERROR: process (pid: 2909807) is no longer running 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.066 00:05:48.066 real 0m1.245s 00:05:48.066 user 0m1.195s 00:05:48.066 sys 0m0.518s 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.066 11:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.066 ************************************ 00:05:48.066 END TEST default_locks 00:05:48.066 ************************************ 00:05:48.066 11:31:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:48.066 11:31:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.066 11:31:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.066 11:31:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.066 11:31:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.066 ************************************ 00:05:48.066 START TEST default_locks_via_rpc 00:05:48.066 ************************************ 00:05:48.066 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:48.066 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2909971 00:05:48.066 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.066 11:31:56 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2909971 00:05:48.325 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2909971 ']' 00:05:48.325 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.325 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.325 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.325 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.325 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.325 [2024-07-15 11:31:56.103156] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:48.325 [2024-07-15 11:31:56.103258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909971 ] 00:05:48.325 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.325 [2024-07-15 11:31:56.160032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.325 [2024-07-15 11:31:56.266042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2909971 00:05:48.583 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2909971 00:05:48.584 11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.841 
11:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2909971 00:05:48.841 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2909971 ']' 00:05:48.841 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2909971 00:05:48.841 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2909971 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2909971' 00:05:48.842 killing process with pid 2909971 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2909971 00:05:48.842 11:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2909971 00:05:49.407 00:05:49.407 real 0m1.168s 00:05:49.407 user 0m1.111s 00:05:49.407 sys 0m0.499s 00:05:49.407 11:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.407 11:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.407 ************************************ 00:05:49.407 END TEST default_locks_via_rpc 00:05:49.407 ************************************ 00:05:49.407 11:31:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.407 11:31:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.407 11:31:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.407 11:31:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.407 11:31:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.407 ************************************ 00:05:49.407 START TEST non_locking_app_on_locked_coremask 00:05:49.407 ************************************ 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2910137 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2910137 /var/tmp/spdk.sock 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2910137 ']' 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.407 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.407 [2024-07-15 11:31:57.327588] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:49.407 [2024-07-15 11:31:57.327668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910137 ] 00:05:49.407 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.407 [2024-07-15 11:31:57.387979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.665 [2024-07-15 11:31:57.502970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2910251 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2910251 /var/tmp/spdk2.sock 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2910251 ']' 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.923 11:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.923 [2024-07-15 11:31:57.788353] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:49.923 [2024-07-15 11:31:57.788429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910251 ] 00:05:49.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.923 [2024-07-15 11:31:57.870403] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.923 [2024-07-15 11:31:57.870428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.181 [2024-07-15 11:31:58.084187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.747 11:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.747 11:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.747 11:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2910137 00:05:50.747 11:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.747 11:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2910137 00:05:51.312 lslocks: write error 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2910137 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2910137 ']' 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2910137 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2910137 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2910137' 00:05:51.312 killing process with pid 2910137 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2910137 00:05:51.312 11:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2910137 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2910251 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2910251 ']' 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2910251 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2910251 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.244 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2910251' 00:05:52.244 
killing process with pid 2910251 00:05:52.245 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2910251 00:05:52.245 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2910251 00:05:52.508 00:05:52.508 real 0m3.216s 00:05:52.508 user 0m3.371s 00:05:52.508 sys 0m1.011s 00:05:52.508 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.508 11:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.508 ************************************ 00:05:52.508 END TEST non_locking_app_on_locked_coremask 00:05:52.508 ************************************ 00:05:52.791 11:32:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.791 11:32:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.791 11:32:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.791 11:32:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.791 11:32:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.791 ************************************ 00:05:52.791 START TEST locking_app_on_unlocked_coremask 00:05:52.791 ************************************ 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2910571 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2910571 /var/tmp/spdk.sock 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2910571 ']' 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.791 11:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.791 [2024-07-15 11:32:00.593391] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:52.791 [2024-07-15 11:32:00.593477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910571 ] 00:05:52.791 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.791 [2024-07-15 11:32:00.652878] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.791 [2024-07-15 11:32:00.652916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.054 [2024-07-15 11:32:00.766569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2910586 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2910586 /var/tmp/spdk2.sock 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2910586 ']' 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.054 11:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.312 [2024-07-15 11:32:01.055183] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:53.312 [2024-07-15 11:32:01.055270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910586 ] 00:05:53.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.312 [2024-07-15 11:32:01.138896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.568 [2024-07-15 11:32:01.352946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.133 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.133 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:54.133 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2910586 00:05:54.133 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2910586 00:05:54.133 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.698 lslocks: write error 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2910571 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2910571 ']' 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2910571 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2910571 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2910571' 00:05:54.698 killing process with pid 2910571 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2910571 00:05:54.698 11:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2910571 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2910586 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2910586 ']' 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2910586 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2910586 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2910586' 00:05:55.658 killing process with pid 2910586 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2910586 00:05:55.658 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2910586 00:05:55.916 00:05:55.916 real 0m3.284s 00:05:55.916 user 0m3.477s 00:05:55.916 sys 0m1.003s 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.916 ************************************ 00:05:55.916 END TEST locking_app_on_unlocked_coremask 00:05:55.916 ************************************ 00:05:55.916 11:32:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.916 11:32:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.916 11:32:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.916 11:32:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.916 11:32:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.916 ************************************ 00:05:55.916 START TEST locking_app_on_locked_coremask 00:05:55.916 ************************************ 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2911008 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2911008 /var/tmp/spdk.sock 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2911008 ']' 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.916 11:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.173 [2024-07-15 11:32:03.933597] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:56.173 [2024-07-15 11:32:03.933681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911008 ] 00:05:56.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.173 [2024-07-15 11:32:03.990985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.173 [2024-07-15 11:32:04.102236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2911027 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2911027 /var/tmp/spdk2.sock 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2911027 /var/tmp/spdk2.sock 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2911027 /var/tmp/spdk2.sock 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2911027 ']' 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.431 11:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.431 [2024-07-15 11:32:04.385536] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:56.431 [2024-07-15 11:32:04.385611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911027 ] 00:05:56.431 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.688 [2024-07-15 11:32:04.470585] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2911008 has claimed it. 00:05:56.688 [2024-07-15 11:32:04.470626] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2911027) - No such process 00:05:57.251 ERROR: process (pid: 2911027) is no longer running 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2911008 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2911008 00:05:57.251 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.508 lslocks: write error 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2911008 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2911008 ']' 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2911008 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911008 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911008' 00:05:57.508 killing process with pid 2911008 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2911008 00:05:57.508 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2911008 00:05:58.071 00:05:58.071 real 0m1.989s 00:05:58.071 user 0m2.159s 00:05:58.071 sys 0m0.622s 00:05:58.071 11:32:05 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.071 11:32:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.071 ************************************ 00:05:58.071 END TEST locking_app_on_locked_coremask 00:05:58.071 ************************************ 00:05:58.071 11:32:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:58.071 11:32:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.071 11:32:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.071 11:32:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.071 11:32:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.071 ************************************ 00:05:58.071 START TEST locking_overlapped_coremask 00:05:58.071 ************************************ 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2911306 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2911306 /var/tmp/spdk.sock 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2911306 ']' 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.071 11:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.071 [2024-07-15 11:32:05.975421] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:58.071 [2024-07-15 11:32:05.975508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911306 ] 00:05:58.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.071 [2024-07-15 11:32:06.033466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.328 [2024-07-15 11:32:06.147748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.328 [2024-07-15 11:32:06.147829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.328 [2024-07-15 11:32:06.147832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2911321 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2911321 /var/tmp/spdk2.sock 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2911321 /var/tmp/spdk2.sock 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2911321 /var/tmp/spdk2.sock 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2911321 ']' 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.585 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.586 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.586 11:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.586 [2024-07-15 11:32:06.457990] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:58.586 [2024-07-15 11:32:06.458098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911321 ] 00:05:58.586 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.586 [2024-07-15 11:32:06.549602] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2911306 has claimed it. 00:05:58.586 [2024-07-15 11:32:06.549662] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2911321) - No such process 00:05:59.517 ERROR: process (pid: 2911321) is no longer running 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2911306 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2911306 ']' 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2911306 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911306 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911306' 00:05:59.517 killing process with pid 2911306 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2911306 00:05:59.517 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2911306 00:05:59.775 00:05:59.775 real 0m1.720s 00:05:59.775 user 0m4.555s 00:05:59.775 sys 0m0.469s 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.775 ************************************ 00:05:59.775 END TEST locking_overlapped_coremask 00:05:59.775 ************************************ 00:05:59.775 11:32:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.775 11:32:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.775 11:32:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.775 11:32:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.775 11:32:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.775 ************************************ 00:05:59.775 START TEST locking_overlapped_coremask_via_rpc 00:05:59.775 ************************************ 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2911491 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2911491 /var/tmp/spdk.sock 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2911491 ']' 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.775 11:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.775 [2024-07-15 11:32:07.741777] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:59.775 [2024-07-15 11:32:07.741891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911491 ] 00:06:00.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.033 [2024-07-15 11:32:07.803698] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.033 [2024-07-15 11:32:07.803760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.033 [2024-07-15 11:32:07.914267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.033 [2024-07-15 11:32:07.914371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.033 [2024-07-15 11:32:07.914380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2911612 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2911612 /var/tmp/spdk2.sock 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2911612 ']' 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.291 11:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.291 [2024-07-15 11:32:08.226251] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:00.291 [2024-07-15 11:32:08.226345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911612 ] 00:06:00.291 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.549 [2024-07-15 11:32:08.313536] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.549 [2024-07-15 11:32:08.313579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.806 [2024-07-15 11:32:08.536994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.806 [2024-07-15 11:32:08.537044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.806 [2024-07-15 11:32:08.537047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.372 [2024-07-15 11:32:09.172832] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2911491 has claimed it. 
00:06:01.372 request: 00:06:01.372 { 00:06:01.372 "method": "framework_enable_cpumask_locks", 00:06:01.372 "req_id": 1 00:06:01.372 } 00:06:01.372 Got JSON-RPC error response 00:06:01.372 response: 00:06:01.372 { 00:06:01.372 "code": -32603, 00:06:01.372 "message": "Failed to claim CPU core: 2" 00:06:01.372 } 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2911491 /var/tmp/spdk.sock 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2911491 ']' 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.372 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.629 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.629 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.629 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2911612 /var/tmp/spdk2.sock 00:06:01.630 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2911612 ']' 00:06:01.630 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.630 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.630 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.630 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.630 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.887 00:06:01.887 real 0m1.983s 00:06:01.887 user 0m1.011s 00:06:01.887 sys 0m0.179s 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.887 11:32:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.887 ************************************ 00:06:01.887 END TEST locking_overlapped_coremask_via_rpc 00:06:01.887 ************************************ 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.887 11:32:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.887 11:32:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2911491 ]] 00:06:01.887 11:32:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2911491 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2911491 ']' 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2911491 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911491 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911491' 00:06:01.887 killing process with pid 2911491 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2911491 00:06:01.887 11:32:09 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2911491 00:06:02.452 11:32:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2911612 ]] 00:06:02.452 11:32:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2911612 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2911612 ']' 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2911612 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911612 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911612' 00:06:02.452 killing process with pid 2911612 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2911612 00:06:02.452 11:32:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2911612 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2911491 ]] 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2911491 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2911491 ']' 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2911491 00:06:02.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2911491) - No such process 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2911491 is not found' 00:06:02.711 Process with pid 2911491 is not found 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2911612 ]] 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2911612 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2911612 ']' 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2911612 00:06:02.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2911612) - No such process 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2911612 is not found' 00:06:02.711 Process with pid 2911612 is not found 00:06:02.711 11:32:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.711 00:06:02.711 real 0m15.980s 00:06:02.711 user 0m27.763s 00:06:02.711 sys 0m5.211s 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.711 11:32:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 ************************************ 00:06:02.711 END TEST cpu_locks 00:06:02.711 ************************************ 00:06:02.711 11:32:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.711 00:06:02.711 real 0m39.923s 00:06:02.711 user 1m15.678s 00:06:02.711 sys 0m9.225s 00:06:02.711 11:32:10 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.711 11:32:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 ************************************ 00:06:02.711 END TEST event 00:06:02.711 ************************************ 00:06:02.971 11:32:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.971 11:32:10 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.971 11:32:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.971 11:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.971 
11:32:10 -- common/autotest_common.sh@10 -- # set +x 00:06:02.971 ************************************ 00:06:02.971 START TEST thread 00:06:02.971 ************************************ 00:06:02.971 11:32:10 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.971 * Looking for test storage... 00:06:02.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:02.971 11:32:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.971 11:32:10 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.971 11:32:10 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.971 11:32:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.971 ************************************ 00:06:02.971 START TEST thread_poller_perf 00:06:02.971 ************************************ 00:06:02.971 11:32:10 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.971 [2024-07-15 11:32:10.823621] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:02.971 [2024-07-15 11:32:10.823691] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911981 ] 00:06:02.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.971 [2024-07-15 11:32:10.880008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.229 [2024-07-15 11:32:10.987594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.229 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:04.163 ====================================== 00:06:04.163 busy:2710059933 (cyc) 00:06:04.163 total_run_count: 362000 00:06:04.163 tsc_hz: 2700000000 (cyc) 00:06:04.163 ====================================== 00:06:04.163 poller_cost: 7486 (cyc), 2772 (nsec) 00:06:04.163 00:06:04.163 real 0m1.296s 00:06:04.163 user 0m1.221s 00:06:04.163 sys 0m0.069s 00:06:04.163 11:32:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.163 11:32:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.163 ************************************ 00:06:04.163 END TEST thread_poller_perf 00:06:04.163 ************************************ 00:06:04.163 11:32:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.163 11:32:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.163 11:32:12 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.163 11:32:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.163 11:32:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.422 ************************************ 00:06:04.422 START TEST thread_poller_perf 00:06:04.422 ************************************ 00:06:04.422 11:32:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.422 [2024-07-15 11:32:12.168104] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:04.422 [2024-07-15 11:32:12.168176] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912137 ] 00:06:04.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.422 [2024-07-15 11:32:12.226172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.422 [2024-07-15 11:32:12.343466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.422 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:05.797 ====================================== 00:06:05.797 busy:2702490927 (cyc) 00:06:05.797 total_run_count: 4890000 00:06:05.797 tsc_hz: 2700000000 (cyc) 00:06:05.797 ====================================== 00:06:05.797 poller_cost: 552 (cyc), 204 (nsec) 00:06:05.797 00:06:05.797 real 0m1.308s 00:06:05.797 user 0m1.227s 00:06:05.797 sys 0m0.075s 00:06:05.797 11:32:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.797 11:32:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 ************************************ 00:06:05.797 END TEST thread_poller_perf 00:06:05.797 ************************************ 00:06:05.797 11:32:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:05.797 11:32:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.797 00:06:05.797 real 0m2.754s 00:06:05.797 user 0m2.512s 00:06:05.797 sys 0m0.242s 00:06:05.797 11:32:13 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.797 11:32:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 ************************************ 00:06:05.797 END TEST thread 00:06:05.797 ************************************ 00:06:05.797 11:32:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.797 11:32:13 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:05.797 11:32:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.797 11:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.797 11:32:13 -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 ************************************ 00:06:05.797 START TEST accel 00:06:05.797 ************************************ 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:05.797 * Looking for test storage... 00:06:05.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:05.797 11:32:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:05.797 11:32:13 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:05.797 11:32:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.797 11:32:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2912457 00:06:05.797 11:32:13 accel -- accel/accel.sh@63 -- # waitforlisten 2912457 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@829 -- # '[' -z 2912457 ']' 00:06:05.797 11:32:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.797 11:32:13 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.797 11:32:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.797 11:32:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.797 11:32:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.797 11:32:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 11:32:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.797 11:32:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.797 11:32:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.797 11:32:13 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.797 [2024-07-15 11:32:13.632282] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:05.797 [2024-07-15 11:32:13.632374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912457 ] 00:06:05.797 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.798 [2024-07-15 11:32:13.697549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.056 [2024-07-15 11:32:13.811909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.313 11:32:14 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.313 11:32:14 accel -- common/autotest_common.sh@862 -- # return 0 00:06:06.313 11:32:14 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:06.313 11:32:14 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:06.313 11:32:14 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:06.313 11:32:14 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:06.313 11:32:14 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.313 11:32:14 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:06.313 11:32:14 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.314 11:32:14 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 
11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.314 11:32:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.314 11:32:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.314 11:32:14 accel -- accel/accel.sh@75 -- # killprocess 2912457 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@948 -- # '[' -z 2912457 ']' 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@952 -- # kill -0 2912457 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@953 -- # uname 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2912457 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2912457' 00:06:06.314 killing process with pid 2912457 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@967 -- # kill 2912457 00:06:06.314 11:32:14 accel -- common/autotest_common.sh@972 -- # wait 2912457 00:06:06.881 11:32:14 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:06.881 11:32:14 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.881 11:32:14 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:06.881 11:32:14 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:06.881 11:32:14 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.881 11:32:14 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.881 11:32:14 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.881 11:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.881 ************************************ 00:06:06.881 START TEST accel_missing_filename 00:06:06.881 ************************************ 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.881 11:32:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:06.882 11:32:14 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:06.882 [2024-07-15 11:32:14.672814] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:06.882 [2024-07-15 11:32:14.672890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912624 ] 00:06:06.882 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.882 [2024-07-15 11:32:14.731450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.882 [2024-07-15 11:32:14.836133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.140 [2024-07-15 11:32:14.894469] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.140 [2024-07-15 11:32:14.977207] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:07.140 A filename is required. 
00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.140 00:06:07.140 real 0m0.435s 00:06:07.140 user 0m0.325s 00:06:07.140 sys 0m0.144s 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.140 11:32:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:07.140 ************************************ 00:06:07.140 END TEST accel_missing_filename 00:06:07.140 ************************************ 00:06:07.140 11:32:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.140 11:32:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.140 11:32:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:07.140 11:32:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.141 11:32:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.399 ************************************ 00:06:07.399 START TEST accel_compress_verify 00:06:07.399 ************************************ 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.399 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.399 11:32:15 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:07.399 11:32:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.399 [2024-07-15 11:32:15.159704] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:07.399 [2024-07-15 11:32:15.159781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912646 ] 00:06:07.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.399 [2024-07-15 11:32:15.217127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.399 [2024-07-15 11:32:15.321258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.399 [2024-07-15 11:32:15.378153] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.657 [2024-07-15 11:32:15.462078] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:07.657 00:06:07.657 Compression does not support the verify option, aborting. 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.657 00:06:07.657 real 0m0.437s 00:06:07.657 user 0m0.339s 00:06:07.657 sys 0m0.133s 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.657 11:32:15 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:07.657 ************************************ 00:06:07.657 END TEST accel_compress_verify 00:06:07.657 ************************************ 00:06:07.657 11:32:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.657 11:32:15 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:07.657 11:32:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.657 11:32:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.657 11:32:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.657 ************************************ 00:06:07.657 START TEST accel_wrong_workload 00:06:07.657 ************************************ 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.657 11:32:15 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.657 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:07.657 11:32:15 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:07.916 Unsupported workload type: foobar 00:06:07.916 [2024-07-15 11:32:15.644502] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:07.916 accel_perf options: 00:06:07.916 [-h help message] 00:06:07.916 [-q queue depth per core] 00:06:07.916 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.916 [-T number of threads per core 00:06:07.916 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.916 [-t time in seconds] 00:06:07.916 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.916 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:07.916 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.916 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.916 [-S for crc32c workload, use this seed value (default 0) 00:06:07.916 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.916 [-f for fill workload, use this BYTE value (default 255) 00:06:07.916 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.916 [-y verify result if this switch is on] 00:06:07.917 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.917 Can be used to spread operations across a wider range of memory. 
00:06:07.917 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:07.917 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.917 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.917 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.917 00:06:07.917 real 0m0.024s 00:06:07.917 user 0m0.015s 00:06:07.917 sys 0m0.009s 00:06:07.917 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.917 11:32:15 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:07.917 ************************************ 00:06:07.917 END TEST accel_wrong_workload 00:06:07.917 ************************************ 00:06:07.917 Error: writing output failed: Broken pipe 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.917 11:32:15 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.917 ************************************ 00:06:07.917 START TEST accel_negative_buffers 00:06:07.917 ************************************ 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:07.917 11:32:15 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:07.917 -x option must be non-negative. 
00:06:07.917 [2024-07-15 11:32:15.711044] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:07.917 accel_perf options: 00:06:07.917 [-h help message] 00:06:07.917 [-q queue depth per core] 00:06:07.917 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.917 [-T number of threads per core 00:06:07.917 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.917 [-t time in seconds] 00:06:07.917 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.917 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:07.917 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.917 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.917 [-S for crc32c workload, use this seed value (default 0) 00:06:07.917 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.917 [-f for fill workload, use this BYTE value (default 255) 00:06:07.917 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.917 [-y verify result if this switch is on] 00:06:07.917 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.917 Can be used to spread operations across a wider range of memory. 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.917 00:06:07.917 real 0m0.022s 00:06:07.917 user 0m0.007s 00:06:07.917 sys 0m0.015s 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.917 11:32:15 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:07.917 ************************************ 00:06:07.917 END TEST accel_negative_buffers 00:06:07.917 ************************************ 00:06:07.917 Error: writing output failed: Broken pipe 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.917 11:32:15 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.917 11:32:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.917 ************************************ 00:06:07.917 START TEST accel_crc32c 00:06:07.917 ************************************ 00:06:07.917 11:32:15 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:07.917 11:32:15 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:07.917 [2024-07-15 11:32:15.773647] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:07.917 [2024-07-15 11:32:15.773710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912833 ] 00:06:07.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.917 [2024-07-15 11:32:15.833776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.175 [2024-07-15 11:32:15.943809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.176 11:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:09.550 11:32:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.550 00:06:09.550 real 0m1.447s 00:06:09.550 user 0m1.309s 00:06:09.550 sys 0m0.140s 00:06:09.550 11:32:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.550 11:32:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:09.550 ************************************ 00:06:09.550 END TEST accel_crc32c 00:06:09.550 ************************************ 00:06:09.550 11:32:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.550 11:32:17 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:09.550 11:32:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:09.550 11:32:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.550 11:32:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.550 ************************************ 00:06:09.550 START TEST accel_crc32c_C2 00:06:09.550 ************************************ 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:09.550 [2024-07-15 11:32:17.270615] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:09.550 [2024-07-15 11:32:17.270677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912989 ] 00:06:09.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.550 [2024-07-15 11:32:17.327419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.550 [2024-07-15 11:32:17.433259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.550 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:09.551 11:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.924 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.925 00:06:10.925 real 0m1.438s 00:06:10.925 user 0m1.304s 00:06:10.925 sys 0m0.137s 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.925 11:32:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:10.925 ************************************ 00:06:10.925 END TEST accel_crc32c_C2 00:06:10.925 ************************************ 00:06:10.925 11:32:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.925 11:32:18 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:10.925 11:32:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.925 11:32:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.925 11:32:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.925 ************************************ 00:06:10.925 START TEST accel_copy 00:06:10.925 ************************************ 00:06:10.925 11:32:18 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:10.925 11:32:18 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:10.925 [2024-07-15 11:32:18.762748] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:10.925 [2024-07-15 11:32:18.762827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913146 ] 00:06:10.925 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.925 [2024-07-15 11:32:18.820798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.183 [2024-07-15 11:32:18.927010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.183 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.183 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.184 11:32:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 
11:32:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:12.559 11:32:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.559 00:06:12.559 real 0m1.444s 00:06:12.559 user 0m1.301s 00:06:12.559 sys 0m0.145s 00:06:12.559 11:32:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.559 11:32:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.559 ************************************ 00:06:12.559 END TEST accel_copy 00:06:12.559 ************************************ 00:06:12.559 11:32:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.559 11:32:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.559 11:32:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:12.559 11:32:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.559 11:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.559 ************************************ 00:06:12.559 START TEST accel_fill 00:06:12.559 ************************************ 00:06:12.559 11:32:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:12.559 [2024-07-15 11:32:20.247196] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:12.559 [2024-07-15 11:32:20.247265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913418 ] 00:06:12.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.559 [2024-07-15 11:32:20.306136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.559 [2024-07-15 11:32:20.414731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.559 11:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.934 11:32:21 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:13.934 11:32:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.934 00:06:13.934 real 0m1.434s 00:06:13.934 user 0m1.297s 00:06:13.934 sys 0m0.139s 00:06:13.934 11:32:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.934 11:32:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:13.934 ************************************ 00:06:13.934 END TEST accel_fill 00:06:13.934 ************************************ 00:06:13.934 11:32:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.934 11:32:21 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:13.934 11:32:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.934 11:32:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.934 11:32:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.934 ************************************ 00:06:13.934 START TEST accel_copy_crc32c 00:06:13.934 ************************************ 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:13.934 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:13.934 [2024-07-15 11:32:21.733090] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:13.934 [2024-07-15 11:32:21.733155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913581 ] 00:06:13.934 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.934 [2024-07-15 11:32:21.791849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.934 [2024-07-15 11:32:21.896703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.211 
11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.211 11:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.593 00:06:15.593 real 0m1.441s 00:06:15.593 user 0m1.303s 00:06:15.593 sys 0m0.141s 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.593 11:32:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:15.593 ************************************ 00:06:15.593 END TEST accel_copy_crc32c 00:06:15.593 ************************************ 00:06:15.593 11:32:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.593 11:32:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:15.593 11:32:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.593 11:32:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.593 11:32:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.593 ************************************ 00:06:15.593 START TEST accel_copy_crc32c_C2 00:06:15.593 ************************************ 00:06:15.593 11:32:23 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:15.593 [2024-07-15 11:32:23.224641] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:15.593 [2024-07-15 11:32:23.224706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913737 ] 00:06:15.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.593 [2024-07-15 11:32:23.282592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.593 [2024-07-15 11:32:23.387511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.593 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.594 11:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.969 00:06:16.969 real 0m1.433s 00:06:16.969 user 0m1.304s 00:06:16.969 sys 0m0.131s 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.969 11:32:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:16.969 ************************************ 00:06:16.969 END TEST accel_copy_crc32c_C2 00:06:16.969 ************************************ 00:06:16.969 11:32:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.969 11:32:24 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:16.969 11:32:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.969 11:32:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.969 11:32:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.969 ************************************ 00:06:16.969 START TEST accel_dualcast 00:06:16.969 ************************************ 00:06:16.969 11:32:24 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:16.969 [2024-07-15 11:32:24.708733] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:16.969 [2024-07-15 11:32:24.708829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914004 ] 00:06:16.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.969 [2024-07-15 11:32:24.766714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.969 [2024-07-15 11:32:24.875817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.969 11:32:24 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.970 11:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:18.346 11:32:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.346 00:06:18.346 real 0m1.431s 00:06:18.346 user 0m1.294s 00:06:18.346 sys 0m0.138s 00:06:18.346 11:32:26 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.346 11:32:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:18.346 ************************************ 00:06:18.346 END TEST accel_dualcast 00:06:18.346 ************************************ 00:06:18.346 11:32:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.346 11:32:26 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:18.346 11:32:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.346 11:32:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.346 11:32:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.346 ************************************ 00:06:18.346 START TEST accel_compare 00:06:18.346 ************************************ 00:06:18.346 11:32:26 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:18.346 11:32:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:18.346 [2024-07-15 11:32:26.191065] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
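The dualcast run above ends with the pattern every accel test in this log follows: bash timing figures (real/user/sys), an END TEST banner, and then run_test launching the next workload -- here run_test accel_compare accel_test -t 1 -w compare -y, which execs the accel_perf example binary with its JSON accel config fed in over /dev/fd/62. A rough way to replay one of these workloads by hand, outside the harness, is sketched below; dropping -c (assumed here to be optional, falling back to the software module this log ends up using anyway) is the only change from the traced command line, and the flag meanings in the comment are assumptions rather than something this log states:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t = run time in seconds, -w = workload, -y = verify the result (assumed meanings)
  ./build/examples/accel_perf -t 1 -w compare -y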
00:06:18.346 [2024-07-15 11:32:26.191129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914171 ] 00:06:18.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.346 [2024-07-15 11:32:26.250933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.604 [2024-07-15 11:32:26.356394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.604 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.604 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.604 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.604 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.605 11:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.977 
11:32:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.977 11:32:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:19.978 11:32:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.978 00:06:19.978 real 0m1.442s 00:06:19.978 user 0m1.308s 00:06:19.978 sys 0m0.136s 00:06:19.978 11:32:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.978 11:32:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:19.978 ************************************ 00:06:19.978 END TEST accel_compare 00:06:19.978 ************************************ 00:06:19.978 11:32:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.978 11:32:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:19.978 11:32:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.978 11:32:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.978 11:32:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.978 ************************************ 00:06:19.978 START TEST accel_xor 00:06:19.978 ************************************ 00:06:19.978 11:32:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:19.978 [2024-07-15 11:32:27.684452] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
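Before a test's result is accepted, the trace at accel/accel.sh@27 runs three [[ ... ]] checks with the variables already expanded (here: software, compare, software). Written back in unexpanded form they would read roughly like the sketch below; the variable names are taken from the accel_module=software and accel_opc=compare assignments visible earlier in this same trace, but the exact script text is an assumption, and the literal "software" on the last line presumably stands in for a variable holding the expected module:

  # post-run assertions, approximately as accel.sh@27 would read before expansion
  [[ -n "$accel_module" ]]              # some engine was selected for the run
  [[ -n "$accel_opc" ]]                 # the opcode under test was recorded
  [[ "$accel_module" == software ]]     # and it matches the expected module for this run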
00:06:19.978 [2024-07-15 11:32:27.684516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914329 ] 00:06:19.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.978 [2024-07-15 11:32:27.741945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.978 [2024-07-15 11:32:27.846281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.978 11:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.347 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.348 00:06:21.348 real 0m1.437s 00:06:21.348 user 0m1.297s 00:06:21.348 sys 0m0.143s 00:06:21.348 11:32:29 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.348 11:32:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:21.348 ************************************ 00:06:21.348 END TEST accel_xor 00:06:21.348 ************************************ 00:06:21.348 11:32:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.348 11:32:29 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:21.348 11:32:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.348 11:32:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.348 11:32:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.348 ************************************ 00:06:21.348 START TEST accel_xor 00:06:21.348 ************************************ 00:06:21.348 11:32:29 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:21.348 11:32:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:21.348 [2024-07-15 11:32:29.169900] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
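The xor test that just completed and the one starting here are the same workload with a different fan-in: the first run was launched as accel_test -t 1 -w xor -y and traced val=2 for the number of source buffers, while this second run_test accel_xor call adds -x 3 and traces val=3. Manual back-to-back runs would differ only in that flag; a minimal sketch, with the source-count reading of -x inferred from the traced values rather than stated in this log:

  ./build/examples/accel_perf -t 1 -w xor -y        # default case, 2 source buffers (val=2 above)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # second case, 3 source buffers (val=3 below)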
00:06:21.348 [2024-07-15 11:32:29.169975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914547 ] 00:06:21.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.348 [2024-07-15 11:32:29.231037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.605 [2024-07-15 11:32:29.337268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.605 11:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:22.976 11:32:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.976 00:06:22.976 real 0m1.420s 00:06:22.976 user 0m1.295s 00:06:22.976 sys 0m0.127s 00:06:22.976 11:32:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.976 11:32:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:22.976 ************************************ 00:06:22.976 END TEST accel_xor 00:06:22.976 ************************************ 00:06:22.976 11:32:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.976 11:32:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:22.976 11:32:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.976 11:32:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.976 11:32:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.976 ************************************ 00:06:22.976 START TEST accel_dif_verify 00:06:22.976 ************************************ 00:06:22.976 11:32:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:22.976 [2024-07-15 11:32:30.640984] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
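From here to the end of this block the harness walks the three DIF workloads in turn: dif_verify (starting above), then dif_generate and dif_generate_copy, each launched through run_test with only the -w value changing and without the -y flag the earlier tests used. The extra '512 bytes' and '8 bytes' values traced for these cases are presumably the protected-block granularity and the per-block DIF tuple size, though the trace itself does not label them. A compact equivalent of the three launches -- the loop is an illustration, not what accel.sh literally does:

  # run from the spdk checkout, as in the earlier sketch
  for w in dif_verify dif_generate dif_generate_copy; do
      ./build/examples/accel_perf -t 1 -w "$w"
  done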
00:06:22.976 [2024-07-15 11:32:30.641057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914759 ] 00:06:22.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.976 [2024-07-15 11:32:30.698047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.976 [2024-07-15 11:32:30.802986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.976 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.977 11:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:24.351 11:32:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.351 00:06:24.351 real 0m1.430s 00:06:24.351 user 0m1.305s 00:06:24.351 sys 0m0.128s 00:06:24.351 11:32:32 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.351 11:32:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:24.351 ************************************ 00:06:24.351 END TEST accel_dif_verify 00:06:24.351 ************************************ 00:06:24.351 11:32:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.351 11:32:32 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:24.351 11:32:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:24.351 11:32:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.351 11:32:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.351 ************************************ 00:06:24.351 START TEST accel_dif_generate 00:06:24.351 ************************************ 00:06:24.351 11:32:32 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.351 
11:32:32 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:24.351 11:32:32 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:24.351 [2024-07-15 11:32:32.124513] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:24.351 [2024-07-15 11:32:32.124576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914917 ] 00:06:24.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.351 [2024-07-15 11:32:32.182305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.351 [2024-07-15 11:32:32.284190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:24.609 11:32:32 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.609 11:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.981 11:32:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:25.981 11:32:33 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.981 00:06:25.981 real 0m1.435s 00:06:25.981 user 0m1.301s 00:06:25.981 sys 0m0.137s 00:06:25.981 11:32:33 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.981 11:32:33 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:25.981 ************************************ 00:06:25.981 END TEST accel_dif_generate 00:06:25.981 ************************************ 00:06:25.981 11:32:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.981 11:32:33 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:25.981 11:32:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.981 11:32:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.981 11:32:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.981 ************************************ 00:06:25.981 START TEST accel_dif_generate_copy 00:06:25.981 ************************************ 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.981 [2024-07-15 11:32:33.609433] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
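For reference, each of these accel cases is driven by the same accel_perf binary whose command line is recorded in this log. A minimal standalone sketch, assuming the same workspace layout and assuming the JSON accel config that accel.sh normally pipes in on /dev/fd/62 can be omitted so the default software module is used (flags below are copied from the logged command, nothing else is added):
# run the dif_generate_copy workload for 1 second (sketch; software accel module assumed)
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w dif_generate_copy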
00:06:25.981 [2024-07-15 11:32:33.609497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915069 ] 00:06:25.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.981 [2024-07-15 11:32:33.667841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.981 [2024-07-15 11:32:33.780519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.981 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.982 11:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.356 00:06:27.356 real 0m1.445s 00:06:27.356 user 0m1.295s 00:06:27.356 sys 0m0.152s 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.356 11:32:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.356 ************************************ 00:06:27.356 END TEST accel_dif_generate_copy 00:06:27.356 ************************************ 00:06:27.356 11:32:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.356 11:32:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:27.356 11:32:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.356 11:32:35 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:27.356 11:32:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.356 11:32:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.356 ************************************ 00:06:27.356 START TEST accel_comp 00:06:27.356 ************************************ 00:06:27.356 11:32:35 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.356 11:32:35 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:27.356 [2024-07-15 11:32:35.101328] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:27.356 [2024-07-15 11:32:35.101389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915347 ] 00:06:27.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.356 [2024-07-15 11:32:35.158557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.356 [2024-07-15 11:32:35.262862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.356 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 11:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:28.730 11:32:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.730 00:06:28.730 real 0m1.429s 00:06:28.730 user 0m1.299s 00:06:28.730 sys 0m0.133s 00:06:28.730 11:32:36 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.730 11:32:36 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:28.730 ************************************ 00:06:28.730 END TEST accel_comp 00:06:28.730 ************************************ 00:06:28.730 11:32:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.730 11:32:36 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.730 11:32:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:28.730 11:32:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.730 11:32:36 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:28.730 ************************************ 00:06:28.730 START TEST accel_decomp 00:06:28.730 ************************************ 00:06:28.730 11:32:36 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.730 11:32:36 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.731 11:32:36 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.731 11:32:36 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:28.731 11:32:36 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:28.731 [2024-07-15 11:32:36.584384] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
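The compress and decompress cases differ from the DIF ones only in the extra flags visible in the logged commands: -l points accel_perf at the test/accel/bib input file, and the decompress runs additionally pass -y (taken here to request result verification; that reading is an assumption, the flag itself is copied from the log). A standalone sketch under the same assumptions as the earlier one:
# decompress the bundled bib test file for 1 second, with the logged -y flag
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y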
00:06:28.731 [2024-07-15 11:32:36.584448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915504 ] 00:06:28.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.731 [2024-07-15 11:32:36.642991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.989 [2024-07-15 11:32:36.749667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.989 11:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.362 11:32:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.362 00:06:30.362 real 0m1.443s 00:06:30.362 user 0m1.305s 00:06:30.362 sys 0m0.140s 00:06:30.362 11:32:38 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.362 11:32:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:30.362 ************************************ 00:06:30.362 END TEST accel_decomp 00:06:30.362 ************************************ 00:06:30.362 11:32:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.362 11:32:38 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.362 11:32:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:30.362 11:32:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.362 11:32:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.362 ************************************ 00:06:30.362 START TEST accel_decomp_full 00:06:30.362 ************************************ 00:06:30.362 11:32:38 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.362 11:32:38 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:30.362 [2024-07-15 11:32:38.077883] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:30.362 [2024-07-15 11:32:38.077946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915663 ] 00:06:30.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.362 [2024-07-15 11:32:38.134193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.362 [2024-07-15 11:32:38.240849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.362 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.363 11:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.736 11:32:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.736 00:06:31.736 real 0m1.442s 00:06:31.736 user 0m1.314s 00:06:31.736 sys 0m0.130s 00:06:31.736 11:32:39 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.736 11:32:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:31.736 ************************************ 00:06:31.736 END TEST accel_decomp_full 00:06:31.736 ************************************ 00:06:31.736 11:32:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.736 11:32:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.736 11:32:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:31.736 11:32:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.736 11:32:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.736 ************************************ 00:06:31.736 START TEST accel_decomp_mcore 00:06:31.736 ************************************ 00:06:31.736 11:32:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.736 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:31.736 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:31.736 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.736 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.736 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:31.737 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:31.737 [2024-07-15 11:32:39.568593] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
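The *_mcore variants add -m 0xf, the usual SPDK core-mask option, which lines up with the "Total cores available: 4" notice and the four reactor start-up messages in this run. A sketch of the multicore invocation, again with the harness-supplied JSON config omitted:
# run the decompress workload for 1 second across cores 0-3 (core mask 0xf)
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf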
00:06:31.737 [2024-07-15 11:32:39.568655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915929 ] 00:06:31.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.737 [2024-07-15 11:32:39.627347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.995 [2024-07-15 11:32:39.739931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.995 [2024-07-15 11:32:39.739991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.995 [2024-07-15 11:32:39.740059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.995 [2024-07-15 11:32:39.740062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:31.995 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:31.996 11:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.373 00:06:33.373 real 0m1.441s 00:06:33.373 user 0m4.707s 00:06:33.373 sys 0m0.141s 00:06:33.373 11:32:40 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.373 11:32:40 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:33.373 ************************************ 00:06:33.373 END TEST accel_decomp_mcore 00:06:33.373 ************************************ 00:06:33.373 11:32:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.373 11:32:41 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.373 11:32:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:33.373 11:32:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.373 11:32:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.373 ************************************ 00:06:33.373 START TEST accel_decomp_full_mcore 00:06:33.373 ************************************ 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:33.373 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:33.373 [2024-07-15 11:32:41.054022] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:33.373 [2024-07-15 11:32:41.054084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916100 ] 00:06:33.373 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.373 [2024-07-15 11:32:41.112435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.373 [2024-07-15 11:32:41.220292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.373 [2024-07-15 11:32:41.220355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.374 [2024-07-15 11:32:41.220420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.374 [2024-07-15 11:32:41.220423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.374 11:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.747 00:06:34.747 real 0m1.471s 00:06:34.747 user 0m4.822s 00:06:34.747 sys 0m0.152s 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.747 11:32:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:34.747 ************************************ 00:06:34.747 END TEST accel_decomp_full_mcore 00:06:34.747 ************************************ 00:06:34.747 11:32:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.747 11:32:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.747 11:32:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:34.747 11:32:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.747 11:32:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.747 ************************************ 00:06:34.747 START TEST accel_decomp_mthread 00:06:34.748 ************************************ 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:34.748 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:34.748 [2024-07-15 11:32:42.573337] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:34.748 [2024-07-15 11:32:42.573401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916255 ] 00:06:34.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.748 [2024-07-15 11:32:42.630989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.006 [2024-07-15 11:32:42.737693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.006 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.007 11:32:42 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.007 11:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.381 11:32:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.381 11:32:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.381 00:06:36.381 real 0m1.447s 00:06:36.381 user 0m1.309s 00:06:36.381 sys 0m0.140s 00:06:36.381 11:32:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.381 11:32:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:36.381 ************************************ 00:06:36.381 END TEST accel_decomp_mthread 00:06:36.381 ************************************ 00:06:36.381 11:32:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.381 11:32:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.381 11:32:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:36.381 11:32:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.381 11:32:44 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.381 ************************************ 00:06:36.381 START TEST accel_decomp_full_mthread 00:06:36.381 ************************************ 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:36.381 [2024-07-15 11:32:44.067612] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:36.381 [2024-07-15 11:32:44.067677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916529 ] 00:06:36.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.381 [2024-07-15 11:32:44.125287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.381 [2024-07-15 11:32:44.228895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.381 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.382 11:32:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.382 11:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.756 00:06:37.756 real 0m1.461s 00:06:37.756 user 0m1.325s 00:06:37.756 sys 0m0.138s 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.756 11:32:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:37.756 ************************************ 00:06:37.756 END 
TEST accel_decomp_full_mthread 00:06:37.756 ************************************ 00:06:37.756 11:32:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.756 11:32:45 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:37.756 11:32:45 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.756 11:32:45 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:37.756 11:32:45 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:37.756 11:32:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.756 11:32:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.756 11:32:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.756 11:32:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.756 11:32:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.756 11:32:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.756 11:32:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.756 11:32:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:37.756 11:32:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:37.756 ************************************ 00:06:37.756 START TEST accel_dif_functional_tests 00:06:37.756 ************************************ 00:06:37.756 11:32:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.756 [2024-07-15 11:32:45.597121] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:37.756 [2024-07-15 11:32:45.597180] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916687 ] 00:06:37.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.756 [2024-07-15 11:32:45.653427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.014 [2024-07-15 11:32:45.764304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.014 [2024-07-15 11:32:45.764358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.014 [2024-07-15 11:32:45.764363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.014 00:06:38.014 00:06:38.014 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.014 http://cunit.sourceforge.net/ 00:06:38.014 00:06:38.014 00:06:38.014 Suite: accel_dif 00:06:38.014 Test: verify: DIF generated, GUARD check ...passed 00:06:38.014 Test: verify: DIF generated, APPTAG check ...passed 00:06:38.014 Test: verify: DIF generated, REFTAG check ...passed 00:06:38.014 Test: verify: DIF not generated, GUARD check ...[2024-07-15 11:32:45.861901] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.014 passed 00:06:38.014 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:32:45.861975] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.014 passed 00:06:38.014 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 11:32:45.862008] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.014 passed 00:06:38.014 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:38.014 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
11:32:45.862101] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:38.014 passed 00:06:38.014 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:38.014 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:38.014 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:38.014 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 11:32:45.862239] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:38.014 passed 00:06:38.014 Test: verify copy: DIF generated, GUARD check ...passed 00:06:38.014 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:38.014 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:38.014 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:32:45.862404] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.014 passed 00:06:38.014 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:32:45.862440] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.014 passed 00:06:38.014 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 11:32:45.862474] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.014 passed 00:06:38.014 Test: generate copy: DIF generated, GUARD check ...passed 00:06:38.015 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:38.015 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:38.015 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:38.015 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:38.015 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:38.015 Test: generate copy: iovecs-len validate ...[2024-07-15 11:32:45.862694] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:38.015 passed 00:06:38.015 Test: generate copy: buffer alignment validate ...passed 00:06:38.015 00:06:38.015 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.015 suites 1 1 n/a 0 0 00:06:38.015 tests 26 26 26 0 0 00:06:38.015 asserts 115 115 115 0 n/a 00:06:38.015 00:06:38.015 Elapsed time = 0.005 seconds 00:06:38.273 00:06:38.273 real 0m0.552s 00:06:38.273 user 0m0.849s 00:06:38.273 sys 0m0.179s 00:06:38.273 11:32:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.273 11:32:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:38.273 ************************************ 00:06:38.273 END TEST accel_dif_functional_tests 00:06:38.273 ************************************ 00:06:38.273 11:32:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.273 00:06:38.273 real 0m32.607s 00:06:38.273 user 0m36.112s 00:06:38.273 sys 0m4.482s 00:06:38.273 11:32:46 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.273 11:32:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.273 ************************************ 00:06:38.273 END TEST accel 00:06:38.273 ************************************ 00:06:38.273 11:32:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.273 11:32:46 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.273 11:32:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.273 11:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.273 11:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:38.273 ************************************ 00:06:38.273 START TEST accel_rpc 00:06:38.273 ************************************ 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.273 * Looking for test storage... 00:06:38.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:38.273 11:32:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.273 11:32:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2916838 00:06:38.273 11:32:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.273 11:32:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2916838 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2916838 ']' 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.273 11:32:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.531 [2024-07-15 11:32:46.292393] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:38.531 [2024-07-15 11:32:46.292479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916838 ] 00:06:38.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.531 [2024-07-15 11:32:46.349149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.531 [2024-07-15 11:32:46.456087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.531 11:32:46 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.531 11:32:46 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:38.531 11:32:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:38.531 11:32:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:38.531 11:32:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:38.531 11:32:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:38.531 11:32:46 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:38.531 11:32:46 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.531 11:32:46 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.531 11:32:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.790 ************************************ 00:06:38.790 START TEST accel_assign_opcode 00:06:38.790 ************************************ 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.790 [2024-07-15 11:32:46.524734] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.790 [2024-07-15 11:32:46.532744] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:38.790 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.052 software 00:06:39.052 00:06:39.052 real 0m0.266s 00:06:39.052 user 0m0.033s 00:06:39.052 sys 0m0.009s 00:06:39.052 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.052 11:32:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.052 ************************************ 00:06:39.052 END TEST accel_assign_opcode 00:06:39.052 ************************************ 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:39.052 11:32:46 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2916838 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2916838 ']' 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2916838 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2916838 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2916838' 00:06:39.052 killing process with pid 2916838 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@967 -- # kill 2916838 00:06:39.052 11:32:46 accel_rpc -- common/autotest_common.sh@972 -- # wait 2916838 00:06:39.331 00:06:39.331 real 0m1.090s 00:06:39.331 user 0m1.044s 00:06:39.331 sys 0m0.399s 00:06:39.331 11:32:47 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.331 11:32:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.331 ************************************ 00:06:39.331 END TEST accel_rpc 00:06:39.331 ************************************ 00:06:39.331 11:32:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.331 11:32:47 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:39.331 11:32:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.331 11:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.331 11:32:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.590 ************************************ 00:06:39.590 START TEST app_cmdline 00:06:39.590 ************************************ 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:39.590 * Looking for test storage... 
00:06:39.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:39.590 11:32:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.590 11:32:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2917047 00:06:39.590 11:32:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.590 11:32:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2917047 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2917047 ']' 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.590 11:32:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.590 [2024-07-15 11:32:47.431086] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:39.590 [2024-07-15 11:32:47.431188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917047 ] 00:06:39.590 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.590 [2024-07-15 11:32:47.487657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.847 [2024-07-15 11:32:47.595690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.105 11:32:47 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.105 11:32:47 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:40.105 11:32:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:40.105 { 00:06:40.105 "version": "SPDK v24.09-pre git sha1 e7cce062d", 00:06:40.105 "fields": { 00:06:40.105 "major": 24, 00:06:40.105 "minor": 9, 00:06:40.105 "patch": 0, 00:06:40.105 "suffix": "-pre", 00:06:40.105 "commit": "e7cce062d" 00:06:40.105 } 00:06:40.105 } 00:06:40.105 11:32:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.105 11:32:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.105 11:32:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.105 11:32:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.361 11:32:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.362 11:32:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.362 11:32:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.362 11:32:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.362 11:32:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.362 11:32:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:40.362 11:32:48 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.619 request: 00:06:40.619 { 00:06:40.619 "method": "env_dpdk_get_mem_stats", 00:06:40.619 "req_id": 1 00:06:40.619 } 00:06:40.619 Got JSON-RPC error response 00:06:40.619 response: 00:06:40.619 { 00:06:40.619 "code": -32601, 00:06:40.619 "message": "Method not found" 00:06:40.619 } 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.619 11:32:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2917047 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2917047 ']' 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2917047 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2917047 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2917047' 00:06:40.619 killing process with pid 2917047 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@967 -- # kill 2917047 00:06:40.619 11:32:48 app_cmdline -- common/autotest_common.sh@972 -- # wait 2917047 00:06:41.185 00:06:41.185 real 0m1.564s 00:06:41.185 user 0m1.958s 00:06:41.185 sys 0m0.449s 00:06:41.185 11:32:48 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
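The cmdline test traced above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs answer and anything else is rejected with JSON-RPC error -32601, exactly as the env_dpdk_get_mem_stats attempt shows. A minimal sketch of the same check, with the long workspace paths abbreviated:

    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version        # allowed: returns the version object printed above
    scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats  # rejected: code -32601, "Method not found"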
00:06:41.185 11:32:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.185 ************************************ 00:06:41.185 END TEST app_cmdline 00:06:41.185 ************************************ 00:06:41.185 11:32:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.185 11:32:48 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:41.185 11:32:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.185 11:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.185 11:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.185 ************************************ 00:06:41.185 START TEST version 00:06:41.185 ************************************ 00:06:41.185 11:32:48 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:41.185 * Looking for test storage... 00:06:41.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.185 11:32:48 version -- app/version.sh@17 -- # get_header_version major 00:06:41.185 11:32:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.185 11:32:48 version -- app/version.sh@14 -- # cut -f2 00:06:41.185 11:32:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.185 11:32:48 version -- app/version.sh@17 -- # major=24 00:06:41.185 11:32:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:41.185 11:32:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.185 11:32:48 version -- app/version.sh@14 -- # cut -f2 00:06:41.185 11:32:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.185 11:32:48 version -- app/version.sh@18 -- # minor=9 00:06:41.185 11:32:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:41.185 11:32:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.185 11:32:49 version -- app/version.sh@14 -- # cut -f2 00:06:41.185 11:32:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.185 11:32:49 version -- app/version.sh@19 -- # patch=0 00:06:41.185 11:32:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.185 11:32:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.185 11:32:49 version -- app/version.sh@14 -- # cut -f2 00:06:41.185 11:32:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.185 11:32:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.185 11:32:49 version -- app/version.sh@22 -- # version=24.9 00:06:41.185 11:32:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.185 11:32:49 version -- app/version.sh@28 -- # version=24.9rc0 00:06:41.185 11:32:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:41.185 11:32:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:41.185 11:32:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:41.185 11:32:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:41.185 00:06:41.185 real 0m0.101s 00:06:41.185 user 0m0.047s 00:06:41.185 sys 0m0.075s 00:06:41.185 11:32:49 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.185 11:32:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.185 ************************************ 00:06:41.185 END TEST version 00:06:41.185 ************************************ 00:06:41.185 11:32:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.185 11:32:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@198 -- # uname -s 00:06:41.185 11:32:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:41.185 11:32:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.185 11:32:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.185 11:32:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:41.185 11:32:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.185 11:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:41.185 11:32:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:41.185 11:32:49 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:41.185 11:32:49 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.185 11:32:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.185 11:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.185 11:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:41.185 ************************************ 00:06:41.185 START TEST nvmf_tcp 00:06:41.185 ************************************ 00:06:41.186 11:32:49 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.186 * Looking for test storage... 00:06:41.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.186 11:32:49 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.186 11:32:49 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.186 11:32:49 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.186 11:32:49 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.186 11:32:49 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.186 11:32:49 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.186 11:32:49 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:41.186 11:32:49 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:41.186 11:32:49 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.186 11:32:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:41.186 11:32:49 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:41.186 11:32:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.186 11:32:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.186 11:32:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.444 ************************************ 00:06:41.444 START TEST nvmf_example 00:06:41.444 ************************************ 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:41.444 * Looking for test storage... 
00:06:41.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.444 11:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.445 11:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.445 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:41.445 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:41.445 11:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:41.445 11:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.971 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:43.972 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:43.972 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:43.972 Found net devices under 
0000:84:00.0: cvl_0_0 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:43.972 Found net devices under 0000:84:00.1: cvl_0_1 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:43.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:06:43.972 00:06:43.972 --- 10.0.0.2 ping statistics --- 00:06:43.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.972 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:06:43.972 00:06:43.972 --- 10.0.0.1 ping statistics --- 00:06:43.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.972 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2919004 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2919004 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2919004 ']' 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
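The nvmftestinit trace above moves the e810 port cvl_0_0 into a network namespace as the target side (10.0.0.2) and leaves its sibling cvl_0_1 on the host as the initiator side (10.0.0.1), opens TCP port 4420, and checks reachability in both directions. Condensed from that trace, a sketch of the same plumbing:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> host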
00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.972 11:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:44.904 11:32:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:44.904 EAL: No free 2048 kB hugepages reported on node 1 
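Before the perf results that follow, the example target is configured entirely over RPC against the nvmf app's /var/tmp/spdk.sock, and spdk_nvme_perf then drives it from the host-side interface. A condensed sketch of the sequence visible in the trace above (rpc.py path abbreviated):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # Malloc0: 64 MiB backing bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'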
00:06:54.873 Initializing NVMe Controllers 00:06:54.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:54.873 Initialization complete. Launching workers. 00:06:54.873 ======================================================== 00:06:54.873 Latency(us) 00:06:54.873 Device Information : IOPS MiB/s Average min max 00:06:54.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14945.59 58.38 4283.09 662.76 51986.76 00:06:54.873 ======================================================== 00:06:54.873 Total : 14945.59 58.38 4283.09 662.76 51986.76 00:06:54.873 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.131 11:33:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.131 rmmod nvme_tcp 00:06:55.131 rmmod nvme_fabrics 00:06:55.131 rmmod nvme_keyring 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2919004 ']' 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2919004 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2919004 ']' 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2919004 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919004 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919004' 00:06:55.131 killing process with pid 2919004 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2919004 00:06:55.131 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2919004 00:06:55.390 nvmf threads initialize successfully 00:06:55.390 bdev subsystem init successfully 00:06:55.390 created a nvmf target service 00:06:55.390 create targets's poll groups done 00:06:55.390 all subsystems of target started 00:06:55.390 nvmf target is running 00:06:55.390 all subsystems of target stopped 00:06:55.390 destroy targets's poll groups done 00:06:55.390 destroyed the nvmf target service 00:06:55.390 bdev subsystem finish successfully 00:06:55.390 nvmf threads destroy successfully 00:06:55.390 11:33:03 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.390 11:33:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.931 11:33:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:57.931 11:33:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:57.931 11:33:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:57.931 11:33:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.931 00:06:57.931 real 0m16.189s 00:06:57.931 user 0m45.451s 00:06:57.931 sys 0m3.711s 00:06:57.931 11:33:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.931 11:33:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.931 ************************************ 00:06:57.931 END TEST nvmf_example 00:06:57.931 ************************************ 00:06:57.931 11:33:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:57.931 11:33:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:57.931 11:33:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.931 11:33:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.931 11:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.931 ************************************ 00:06:57.931 START TEST nvmf_filesystem 00:06:57.931 ************************************ 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:57.931 * Looking for test storage... 
00:06:57.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:57.931 11:33:05 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:57.931 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:57.932 #define SPDK_CONFIG_H 00:06:57.932 #define SPDK_CONFIG_APPS 1 00:06:57.932 #define SPDK_CONFIG_ARCH native 00:06:57.932 #undef SPDK_CONFIG_ASAN 00:06:57.932 #undef SPDK_CONFIG_AVAHI 00:06:57.932 #undef SPDK_CONFIG_CET 00:06:57.932 #define SPDK_CONFIG_COVERAGE 1 00:06:57.932 #define SPDK_CONFIG_CROSS_PREFIX 00:06:57.932 #undef SPDK_CONFIG_CRYPTO 00:06:57.932 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:57.932 #undef SPDK_CONFIG_CUSTOMOCF 00:06:57.932 #undef SPDK_CONFIG_DAOS 00:06:57.932 #define SPDK_CONFIG_DAOS_DIR 00:06:57.932 #define SPDK_CONFIG_DEBUG 1 00:06:57.932 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:57.932 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:57.932 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:57.932 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:57.932 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:57.932 #undef SPDK_CONFIG_DPDK_UADK 00:06:57.932 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:57.932 #define SPDK_CONFIG_EXAMPLES 1 00:06:57.932 #undef SPDK_CONFIG_FC 00:06:57.932 #define SPDK_CONFIG_FC_PATH 00:06:57.932 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:57.932 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:57.932 #undef SPDK_CONFIG_FUSE 00:06:57.932 #undef SPDK_CONFIG_FUZZER 00:06:57.932 #define SPDK_CONFIG_FUZZER_LIB 00:06:57.932 #undef SPDK_CONFIG_GOLANG 00:06:57.932 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:57.932 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:57.932 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:57.932 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:57.932 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:57.932 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:57.932 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:57.932 #define SPDK_CONFIG_IDXD 1 00:06:57.932 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:57.932 #undef SPDK_CONFIG_IPSEC_MB 00:06:57.932 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:57.932 #define SPDK_CONFIG_ISAL 1 00:06:57.932 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:57.932 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:57.932 #define SPDK_CONFIG_LIBDIR 00:06:57.932 #undef SPDK_CONFIG_LTO 00:06:57.932 #define SPDK_CONFIG_MAX_LCORES 128 00:06:57.932 #define SPDK_CONFIG_NVME_CUSE 1 00:06:57.932 #undef SPDK_CONFIG_OCF 00:06:57.932 #define SPDK_CONFIG_OCF_PATH 00:06:57.932 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:57.932 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:57.932 #define SPDK_CONFIG_PGO_DIR 00:06:57.932 #undef SPDK_CONFIG_PGO_USE 00:06:57.932 #define SPDK_CONFIG_PREFIX /usr/local 00:06:57.932 #undef SPDK_CONFIG_RAID5F 00:06:57.932 #undef SPDK_CONFIG_RBD 00:06:57.932 #define SPDK_CONFIG_RDMA 1 00:06:57.932 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:57.932 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:57.932 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:57.932 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:57.932 #define SPDK_CONFIG_SHARED 1 00:06:57.932 #undef SPDK_CONFIG_SMA 00:06:57.932 #define SPDK_CONFIG_TESTS 1 00:06:57.932 #undef SPDK_CONFIG_TSAN 00:06:57.932 #define SPDK_CONFIG_UBLK 1 00:06:57.932 #define SPDK_CONFIG_UBSAN 1 00:06:57.932 #undef SPDK_CONFIG_UNIT_TESTS 00:06:57.932 #undef SPDK_CONFIG_URING 00:06:57.932 #define SPDK_CONFIG_URING_PATH 00:06:57.932 #undef SPDK_CONFIG_URING_ZNS 00:06:57.932 #undef SPDK_CONFIG_USDT 00:06:57.932 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:57.932 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:57.932 #define SPDK_CONFIG_VFIO_USER 1 00:06:57.932 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:57.932 #define SPDK_CONFIG_VHOST 1 00:06:57.932 #define SPDK_CONFIG_VIRTIO 1 00:06:57.932 #undef SPDK_CONFIG_VTUNE 00:06:57.932 #define SPDK_CONFIG_VTUNE_DIR 00:06:57.932 #define SPDK_CONFIG_WERROR 1 00:06:57.932 #define SPDK_CONFIG_WPDK_DIR 00:06:57.932 #undef SPDK_CONFIG_XNVME 00:06:57.932 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.932 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:57.933 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:57.934 11:33:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
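The stretch of trace just above shows the harness configuring the sanitizer runtimes: it rebuilds an LSAN suppression file containing the known-benign libfuse3 leak and exports the ASAN/UBSAN option strings that make any real failure abort the run. A condensed, paraphrased sketch of that sequence (paths and option strings are the ones printed in the log; the exact script wording is assumed):

  # Suppress the known libfuse3 leak so LeakSanitizer does not fail otherwise-clean runs.
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"
  export LSAN_OPTIONS=suppressions=$asan_suppression_file
  # Make sanitizer hits fatal and keep stack-trace / coredump behaviour deterministic.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134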
00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2920827 ]] 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2920827 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.jBZCOR 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jBZCOR/tests/target /tmp/spdk.jBZCOR 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:57.934 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39043788800 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083312128 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6039523328 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22538280960 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9007878144 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016664064 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8785920 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541045760 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:57.935 11:33:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=610304 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:57.935 * Looking for test storage... 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=39043788800 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8254115840 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:57.935 11:33:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
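Before sourcing the NVMe-oF helpers, the trace above runs set_test_storage: it builds a list of candidate directories (the test dir, a per-test subdirectory under a mktemp fallback, and the fallback itself), consults df, and keeps the first candidate whose filesystem has at least the requested ~2 GiB free, exporting it as SPDK_TEST_STORAGE. A simplified sketch of that selection, not the exact autotest_common.sh logic (candidate list and sizes taken from the log):

  requested_size=2214592512                     # ~2 GiB plus slack, as requested above
  storage_fallback=$(mktemp -udt spdk.XXXXXX)   # e.g. /tmp/spdk.jBZCOR in this run
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      avail=$(df -B1 --output=avail "$target_dir" 2>/dev/null | tail -n1)
      [[ -n $avail ]] && (( avail >= requested_size )) || continue
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$target_dir"
      break
  done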
00:06:57.935 11:33:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.936 11:33:05 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.936 11:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.836 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.836 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:59.837 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:59.837 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.837 11:33:07 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:59.837 Found net devices under 0000:84:00.0: cvl_0_0 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:59.837 Found net devices under 0000:84:00.1: cvl_0_1 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:59.837 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:07:00.096 00:07:00.096 --- 10.0.0.2 ping statistics --- 00:07:00.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.096 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:07:00.096 00:07:00.096 --- 10.0.0.1 ping statistics --- 00:07:00.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.096 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.096 ************************************ 00:07:00.096 START TEST nvmf_filesystem_no_in_capsule 00:07:00.096 ************************************ 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2922640 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2922640 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2922640 ']' 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.096 11:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.096 [2024-07-15 11:33:07.951651] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:00.096 [2024-07-15 11:33:07.951747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.096 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.096 [2024-07-15 11:33:08.037633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.354 [2024-07-15 11:33:08.179223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.354 [2024-07-15 11:33:08.179301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.354 [2024-07-15 11:33:08.179325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.354 [2024-07-15 11:33:08.179345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.354 [2024-07-15 11:33:08.179364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
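(For reference, the nvmf_tcp_init sequence traced above boils down to moving one of the two cvl_* ports found under 0000:84:00 into a private network namespace and opening TCP/4420 on the other. A condensed sketch, using the interface names and addresses observed in this run; these values are host-specific, not part of the test logic itself:)

# Condensed from the nvmf/common.sh trace above; run as root, names/IPs specific to this host.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1          # clear stale addresses
ip netns add cvl_0_0_ns_spdk                                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # move the target port into it
ip addr add ${NVMF_INITIATOR_IP}/24 dev cvl_0_1              # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add ${NVMF_FIRST_TARGET_IP}/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 ${NVMF_FIRST_TARGET_IP}                            # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 ${NVMF_INITIATOR_IP}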
00:07:00.354 [2024-07-15 11:33:08.179455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.354 [2024-07-15 11:33:08.179538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.354 [2024-07-15 11:33:08.179610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.354 [2024-07-15 11:33:08.179602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.354 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.354 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:00.354 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:00.354 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.354 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.612 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.612 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:00.612 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:00.612 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.612 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.613 [2024-07-15 11:33:08.353493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.613 Malloc1 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.613 [2024-07-15 11:33:08.533080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:00.613 { 00:07:00.613 "name": "Malloc1", 00:07:00.613 "aliases": [ 00:07:00.613 "658787a7-b4b2-48cb-910a-f4e09f317ffb" 00:07:00.613 ], 00:07:00.613 "product_name": "Malloc disk", 00:07:00.613 "block_size": 512, 00:07:00.613 "num_blocks": 1048576, 00:07:00.613 "uuid": "658787a7-b4b2-48cb-910a-f4e09f317ffb", 00:07:00.613 "assigned_rate_limits": { 00:07:00.613 "rw_ios_per_sec": 0, 00:07:00.613 "rw_mbytes_per_sec": 0, 00:07:00.613 "r_mbytes_per_sec": 0, 00:07:00.613 "w_mbytes_per_sec": 0 00:07:00.613 }, 00:07:00.613 "claimed": true, 00:07:00.613 "claim_type": "exclusive_write", 00:07:00.613 "zoned": false, 00:07:00.613 "supported_io_types": { 00:07:00.613 "read": true, 00:07:00.613 "write": true, 00:07:00.613 "unmap": true, 00:07:00.613 "flush": true, 00:07:00.613 "reset": true, 00:07:00.613 "nvme_admin": false, 00:07:00.613 "nvme_io": false, 00:07:00.613 "nvme_io_md": false, 00:07:00.613 "write_zeroes": true, 00:07:00.613 "zcopy": true, 00:07:00.613 "get_zone_info": false, 00:07:00.613 "zone_management": false, 00:07:00.613 "zone_append": false, 00:07:00.613 "compare": false, 00:07:00.613 "compare_and_write": false, 00:07:00.613 "abort": true, 00:07:00.613 "seek_hole": false, 00:07:00.613 "seek_data": false, 00:07:00.613 "copy": true, 00:07:00.613 "nvme_iov_md": false 00:07:00.613 }, 00:07:00.613 "memory_domains": [ 00:07:00.613 { 
00:07:00.613 "dma_device_id": "system", 00:07:00.613 "dma_device_type": 1 00:07:00.613 }, 00:07:00.613 { 00:07:00.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.613 "dma_device_type": 2 00:07:00.613 } 00:07:00.613 ], 00:07:00.613 "driver_specific": {} 00:07:00.613 } 00:07:00.613 ]' 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:00.613 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:00.871 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:00.871 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:00.871 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:00.871 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:00.871 11:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.435 11:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:01.435 11:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:01.435 11:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:01.435 11:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:01.435 11:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:03.960 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:03.961 11:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:04.526 11:33:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.458 ************************************ 00:07:05.458 START TEST filesystem_ext4 00:07:05.458 ************************************ 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:05.458 11:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:05.458 11:33:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:05.458 mke2fs 1.46.5 (30-Dec-2021) 00:07:05.458 Discarding device blocks: 0/522240 done 00:07:05.458 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:05.458 Filesystem UUID: 0cbb8780-5cc8-4eb7-abbe-257d79d37108 00:07:05.458 Superblock backups stored on blocks: 00:07:05.458 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:05.458 00:07:05.458 Allocating group tables: 0/64 done 00:07:05.458 Writing inode tables: 0/64 done 00:07:06.023 Creating journal (8192 blocks): done 00:07:06.846 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:06.846 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:06.846 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.103 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2922640 00:07:07.103 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.103 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.103 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.103 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.103 00:07:07.103 real 0m1.551s 00:07:07.103 user 0m0.023s 00:07:07.103 sys 0m0.043s 00:07:07.103 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:07.104 ************************************ 00:07:07.104 END TEST filesystem_ext4 00:07:07.104 ************************************ 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.104 11:33:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.104 ************************************ 00:07:07.104 START TEST filesystem_btrfs 00:07:07.104 ************************************ 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.104 11:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.361 btrfs-progs v6.6.2 00:07:07.361 See https://btrfs.readthedocs.io for more information. 00:07:07.361 00:07:07.361 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:07.361 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.361 this does not affect your deployments: 00:07:07.361 - DUP for metadata (-m dup) 00:07:07.361 - enabled no-holes (-O no-holes) 00:07:07.361 - enabled free-space-tree (-R free-space-tree) 00:07:07.361 00:07:07.361 Label: (null) 00:07:07.361 UUID: 80548560-fab2-4c08-bd6e-046a8bc3c870 00:07:07.361 Node size: 16384 00:07:07.361 Sector size: 4096 00:07:07.361 Filesystem size: 510.00MiB 00:07:07.361 Block group profiles: 00:07:07.361 Data: single 8.00MiB 00:07:07.361 Metadata: DUP 32.00MiB 00:07:07.361 System: DUP 8.00MiB 00:07:07.361 SSD detected: yes 00:07:07.361 Zoned device: no 00:07:07.361 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.361 Runtime features: free-space-tree 00:07:07.361 Checksum: crc32c 00:07:07.361 Number of devices: 1 00:07:07.361 Devices: 00:07:07.361 ID SIZE PATH 00:07:07.361 1 510.00MiB /dev/nvme0n1p1 00:07:07.361 00:07:07.361 11:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:07.361 11:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2922640 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.291 00:07:08.291 real 0m1.306s 00:07:08.291 user 0m0.020s 00:07:08.291 sys 0m0.110s 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:08.291 ************************************ 00:07:08.291 END TEST filesystem_btrfs 00:07:08.291 ************************************ 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.291 ************************************ 00:07:08.291 START TEST filesystem_xfs 00:07:08.291 ************************************ 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:08.291 11:33:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:08.548 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:08.548 = sectsz=512 attr=2, projid32bit=1 00:07:08.548 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:08.548 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:08.548 data = bsize=4096 blocks=130560, imaxpct=25 00:07:08.548 = sunit=0 swidth=0 blks 00:07:08.548 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:08.548 log =internal log bsize=4096 blocks=16384, version=2 00:07:08.548 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:08.548 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:09.111 Discarding blocks...Done. 
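(The same smoke test that ran for ext4 and btrfs repeats below for xfs. As a sketch, the check in target/filesystem.sh amounts to mounting the new partition, writing and removing a file over the NVMe/TCP path, unmounting, and confirming the target process and the block device are still present; the PID 2922640 and device nvme0n1p1 are specific to this run:)

mount /dev/nvme0n1p1 /mnt/device          # mount the freshly formatted partition
touch /mnt/device/aaa && sync             # write something through the NVMe/TCP path
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 2922640                           # nvmf_tgt must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible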
00:07:09.111 11:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:09.111 11:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2922640 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.713 00:07:11.713 real 0m2.961s 00:07:11.713 user 0m0.016s 00:07:11.713 sys 0m0.061s 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:11.713 ************************************ 00:07:11.713 END TEST filesystem_xfs 00:07:11.713 ************************************ 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.713 11:33:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2922640 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2922640 ']' 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2922640 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2922640 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2922640' 00:07:11.713 killing process with pid 2922640 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2922640 00:07:11.713 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2922640 00:07:11.973 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.973 00:07:11.973 real 0m12.034s 00:07:11.973 user 0m45.890s 00:07:11.973 sys 0m1.914s 00:07:11.973 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.973 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.973 ************************************ 00:07:11.973 END TEST nvmf_filesystem_no_in_capsule 00:07:11.973 ************************************ 00:07:11.973 11:33:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:11.973 11:33:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:12.231 11:33:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:12.231 11:33:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.231 11:33:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.231 ************************************ 00:07:12.231 START TEST nvmf_filesystem_in_capsule 00:07:12.231 ************************************ 00:07:12.231 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2924657 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2924657 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2924657 ']' 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.232 11:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.232 [2024-07-15 11:33:20.042752] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:12.232 [2024-07-15 11:33:20.042858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.232 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.232 [2024-07-15 11:33:20.109806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.232 [2024-07-15 11:33:20.213543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.232 [2024-07-15 11:33:20.213597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:12.232 [2024-07-15 11:33:20.213616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.232 [2024-07-15 11:33:20.213627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.232 [2024-07-15 11:33:20.213642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.232 [2024-07-15 11:33:20.213718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.232 [2024-07-15 11:33:20.213786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.232 [2024-07-15 11:33:20.213850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.232 [2024-07-15 11:33:20.213853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.489 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.489 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:12.489 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.490 [2024-07-15 11:33:20.374565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.490 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 Malloc1 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.748 11:33:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 [2024-07-15 11:33:20.548229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.748 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:12.748 { 00:07:12.748 "name": "Malloc1", 00:07:12.748 "aliases": [ 00:07:12.748 "d2f10f3e-1875-43c0-a723-ebeeea26dfdb" 00:07:12.748 ], 00:07:12.748 "product_name": "Malloc disk", 00:07:12.748 "block_size": 512, 00:07:12.748 "num_blocks": 1048576, 00:07:12.748 "uuid": "d2f10f3e-1875-43c0-a723-ebeeea26dfdb", 00:07:12.748 "assigned_rate_limits": { 00:07:12.749 "rw_ios_per_sec": 0, 00:07:12.749 "rw_mbytes_per_sec": 0, 00:07:12.749 "r_mbytes_per_sec": 0, 00:07:12.749 "w_mbytes_per_sec": 0 00:07:12.749 }, 00:07:12.749 "claimed": true, 00:07:12.749 "claim_type": "exclusive_write", 00:07:12.749 "zoned": false, 00:07:12.749 "supported_io_types": { 00:07:12.749 "read": true, 00:07:12.749 "write": true, 00:07:12.749 "unmap": true, 00:07:12.749 "flush": true, 00:07:12.749 "reset": true, 00:07:12.749 "nvme_admin": false, 00:07:12.749 "nvme_io": false, 00:07:12.749 "nvme_io_md": false, 00:07:12.749 "write_zeroes": true, 00:07:12.749 "zcopy": true, 00:07:12.749 "get_zone_info": false, 00:07:12.749 "zone_management": false, 00:07:12.749 
"zone_append": false, 00:07:12.749 "compare": false, 00:07:12.749 "compare_and_write": false, 00:07:12.749 "abort": true, 00:07:12.749 "seek_hole": false, 00:07:12.749 "seek_data": false, 00:07:12.749 "copy": true, 00:07:12.749 "nvme_iov_md": false 00:07:12.749 }, 00:07:12.749 "memory_domains": [ 00:07:12.749 { 00:07:12.749 "dma_device_id": "system", 00:07:12.749 "dma_device_type": 1 00:07:12.749 }, 00:07:12.749 { 00:07:12.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.749 "dma_device_type": 2 00:07:12.749 } 00:07:12.749 ], 00:07:12.749 "driver_specific": {} 00:07:12.749 } 00:07:12.749 ]' 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:12.749 11:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.681 11:33:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.681 11:33:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:13.681 11:33:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.681 11:33:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:13.681 11:33:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:15.578 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:15.836 11:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:16.768 11:33:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.700 ************************************ 00:07:17.700 START TEST filesystem_in_capsule_ext4 00:07:17.700 ************************************ 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:17.700 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:17.701 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:17.701 11:33:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:17.701 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:17.701 11:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:17.701 mke2fs 1.46.5 (30-Dec-2021) 00:07:17.959 Discarding device blocks: 0/522240 done 00:07:17.959 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:17.959 Filesystem UUID: a1516055-5abc-4b13-8fac-c01b02d3248d 00:07:17.959 Superblock backups stored on blocks: 00:07:17.959 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:17.959 00:07:17.959 Allocating group tables: 0/64 done 00:07:17.959 Writing inode tables: 0/64 done 00:07:18.216 Creating journal (8192 blocks): done 00:07:18.780 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:18.780 00:07:18.780 11:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:18.781 11:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2924657 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.346 00:07:19.346 real 0m1.554s 00:07:19.346 user 0m0.019s 00:07:19.346 sys 0m0.061s 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 ************************************ 00:07:19.346 END TEST filesystem_in_capsule_ext4 00:07:19.346 ************************************ 
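(The btrfs and xfs runs that follow go through the same make_filesystem helper seen in the trace: it only has to pick the right force flag for the mkfs tool and format the partition. A simplified sketch, without the retry bookkeeping the real helper in common/autotest_common.sh carries:)

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    # ext4 spells "force" as -F, the other mkfs tools use -f
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs.$fstype $force "$dev_name"
}
make_filesystem btrfs /dev/nvme0n1p1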
00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 ************************************ 00:07:19.346 START TEST filesystem_in_capsule_btrfs 00:07:19.346 ************************************ 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:19.346 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:19.604 btrfs-progs v6.6.2 00:07:19.604 See https://btrfs.readthedocs.io for more information. 00:07:19.604 00:07:19.604 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:19.604 NOTE: several default settings have changed in version 5.15, please make sure 00:07:19.604 this does not affect your deployments: 00:07:19.604 - DUP for metadata (-m dup) 00:07:19.604 - enabled no-holes (-O no-holes) 00:07:19.604 - enabled free-space-tree (-R free-space-tree) 00:07:19.604 00:07:19.604 Label: (null) 00:07:19.604 UUID: 712c8c15-1204-48f2-8ded-d79bb5b0a493 00:07:19.604 Node size: 16384 00:07:19.604 Sector size: 4096 00:07:19.604 Filesystem size: 510.00MiB 00:07:19.604 Block group profiles: 00:07:19.604 Data: single 8.00MiB 00:07:19.604 Metadata: DUP 32.00MiB 00:07:19.604 System: DUP 8.00MiB 00:07:19.604 SSD detected: yes 00:07:19.604 Zoned device: no 00:07:19.604 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:19.604 Runtime features: free-space-tree 00:07:19.604 Checksum: crc32c 00:07:19.604 Number of devices: 1 00:07:19.604 Devices: 00:07:19.604 ID SIZE PATH 00:07:19.604 1 510.00MiB /dev/nvme0n1p1 00:07:19.604 00:07:19.604 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:19.604 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2924657 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.862 00:07:19.862 real 0m0.604s 00:07:19.862 user 0m0.022s 00:07:19.862 sys 0m0.107s 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.862 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:19.862 ************************************ 00:07:19.862 END TEST filesystem_in_capsule_btrfs 00:07:19.862 ************************************ 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.120 ************************************ 00:07:20.120 START TEST filesystem_in_capsule_xfs 00:07:20.120 ************************************ 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:20.120 11:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:20.120 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:20.120 = sectsz=512 attr=2, projid32bit=1 00:07:20.120 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:20.120 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:20.120 data = bsize=4096 blocks=130560, imaxpct=25 00:07:20.120 = sunit=0 swidth=0 blks 00:07:20.120 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:20.120 log =internal log bsize=4096 blocks=16384, version=2 00:07:20.120 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:20.120 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:21.051 Discarding blocks...Done. 
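The repeated autotest_common.sh lines in each of the three sub-tests come from a small make_filesystem helper that only selects a force flag and runs the matching mkfs. A simplified sketch reconstructed from the xtrace above; the retry bound is an assumption, since only the flag selection and the mkfs invocation are visible in the log:

# Sketch of the make_filesystem helper as reconstructed from the trace.
make_filesystem() {
    local fstype=$1 dev_name=$2 force i

    if [ "$fstype" = ext4 ]; then
        force=-F            # mkfs.ext4 wants uppercase -F to skip its prompt
    else
        force=-f            # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi

    for i in 1 2 3; do                            # assumed retry bound
        mkfs."$fstype" $force "$dev_name" && return 0
        sleep 1
    done
    return 1
}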
00:07:21.051 11:33:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:21.051 11:33:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.948 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.948 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:22.948 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.948 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:22.948 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:22.948 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2924657 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.206 00:07:23.206 real 0m3.067s 00:07:23.206 user 0m0.019s 00:07:23.206 sys 0m0.056s 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:23.206 ************************************ 00:07:23.206 END TEST filesystem_in_capsule_xfs 00:07:23.206 ************************************ 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:23.206 11:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:23.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:23.465 11:33:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2924657 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2924657 ']' 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2924657 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2924657 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2924657' 00:07:23.465 killing process with pid 2924657 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2924657 00:07:23.465 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2924657 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:24.032 00:07:24.032 real 0m11.875s 00:07:24.032 user 0m45.425s 00:07:24.032 sys 0m1.813s 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.032 ************************************ 00:07:24.032 END TEST nvmf_filesystem_in_capsule 00:07:24.032 ************************************ 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.032 rmmod nvme_tcp 00:07:24.032 rmmod nvme_fabrics 00:07:24.032 rmmod nvme_keyring 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.032 11:33:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.591 11:33:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.591 00:07:26.591 real 0m28.580s 00:07:26.591 user 1m32.227s 00:07:26.591 sys 0m5.479s 00:07:26.591 11:33:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.591 11:33:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.591 ************************************ 00:07:26.591 END TEST nvmf_filesystem 00:07:26.591 ************************************ 00:07:26.591 11:33:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:26.591 11:33:34 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:26.591 11:33:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.591 11:33:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.591 11:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.591 ************************************ 00:07:26.591 START TEST nvmf_target_discovery 00:07:26.591 ************************************ 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:26.591 * Looking for test storage... 
00:07:26.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.591 11:33:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.493 11:33:36 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:28.493 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:28.493 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:28.493 Found net devices under 0000:84:00.0: cvl_0_0 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:28.493 Found net devices under 0000:84:00.1: cvl_0_1 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.493 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:28.494 00:07:28.494 --- 10.0.0.2 ping statistics --- 00:07:28.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.494 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:28.494 00:07:28.494 --- 10.0.0.1 ping statistics --- 00:07:28.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.494 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2928160 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2928160 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2928160 ']' 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:28.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.494 11:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.494 [2024-07-15 11:33:36.454469] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:28.494 [2024-07-15 11:33:36.454553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.751 [2024-07-15 11:33:36.527985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.751 [2024-07-15 11:33:36.640377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.751 [2024-07-15 11:33:36.640425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.751 [2024-07-15 11:33:36.640455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.751 [2024-07-15 11:33:36.640466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.751 [2024-07-15 11:33:36.640476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.751 [2024-07-15 11:33:36.640531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.751 [2024-07-15 11:33:36.643755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.751 [2024-07-15 11:33:36.643822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.751 [2024-07-15 11:33:36.643827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 [2024-07-15 11:33:37.449649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
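The rpc_cmd calls that follow provision four null bdevs, one subsystem per bdev, a namespace and a TCP listener for each, and finally a discovery referral. rpc_cmd is the harness wrapper around the target's RPC socket; a rough equivalent driven with scripts/rpc.py directly (the wrapper equivalence is an assumption, the calls and arguments are taken from the trace) would be:

# Sketch: the same provisioning done with scripts/rpc.py against the running nvmf_tgt.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512                 # 100 MiB, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
         -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
         -t tcp -a 10.0.0.2 -s 4420
done

# discovery service listener plus a referral on port 4430
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430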
00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 Null1 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 [2024-07-15 11:33:37.489916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 Null2 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:29.680 11:33:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 Null3 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 Null4 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.680 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:07:29.936 00:07:29.937 Discovery Log Number of Records 6, Generation counter 6 00:07:29.937 =====Discovery Log Entry 0====== 00:07:29.937 trtype: tcp 00:07:29.937 adrfam: ipv4 00:07:29.937 subtype: current discovery subsystem 00:07:29.937 treq: not required 00:07:29.937 portid: 0 00:07:29.937 trsvcid: 4420 00:07:29.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:29.937 traddr: 10.0.0.2 00:07:29.937 eflags: explicit discovery connections, duplicate discovery information 00:07:29.937 sectype: none 00:07:29.937 =====Discovery Log Entry 1====== 00:07:29.937 trtype: tcp 00:07:29.937 adrfam: ipv4 00:07:29.937 subtype: nvme subsystem 00:07:29.937 treq: not required 00:07:29.937 portid: 0 00:07:29.937 trsvcid: 4420 00:07:29.937 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:29.937 traddr: 10.0.0.2 00:07:29.937 eflags: none 00:07:29.937 sectype: none 00:07:29.937 =====Discovery Log Entry 2====== 00:07:29.937 trtype: tcp 00:07:29.937 adrfam: ipv4 00:07:29.937 subtype: nvme subsystem 00:07:29.937 treq: not required 00:07:29.937 portid: 0 00:07:29.937 trsvcid: 4420 00:07:29.937 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:29.937 traddr: 10.0.0.2 00:07:29.937 eflags: none 00:07:29.937 sectype: none 00:07:29.937 =====Discovery Log Entry 3====== 00:07:29.937 trtype: tcp 00:07:29.937 adrfam: ipv4 00:07:29.937 subtype: nvme subsystem 00:07:29.937 treq: not required 00:07:29.937 portid: 0 00:07:29.937 trsvcid: 4420 00:07:29.937 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:29.937 traddr: 10.0.0.2 00:07:29.937 eflags: none 00:07:29.937 sectype: none 00:07:29.937 =====Discovery Log Entry 4====== 00:07:29.937 trtype: tcp 00:07:29.937 adrfam: ipv4 00:07:29.937 subtype: nvme subsystem 00:07:29.937 treq: not required 
00:07:29.937 portid: 0 00:07:29.937 trsvcid: 4420 00:07:29.937 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:29.937 traddr: 10.0.0.2 00:07:29.937 eflags: none 00:07:29.937 sectype: none 00:07:29.937 =====Discovery Log Entry 5====== 00:07:29.937 trtype: tcp 00:07:29.937 adrfam: ipv4 00:07:29.937 subtype: discovery subsystem referral 00:07:29.937 treq: not required 00:07:29.937 portid: 0 00:07:29.937 trsvcid: 4430 00:07:29.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:29.937 traddr: 10.0.0.2 00:07:29.937 eflags: none 00:07:29.937 sectype: none 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:29.937 Perform nvmf subsystem discovery via RPC 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 [ 00:07:29.937 { 00:07:29.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:29.937 "subtype": "Discovery", 00:07:29.937 "listen_addresses": [ 00:07:29.937 { 00:07:29.937 "trtype": "TCP", 00:07:29.937 "adrfam": "IPv4", 00:07:29.937 "traddr": "10.0.0.2", 00:07:29.937 "trsvcid": "4420" 00:07:29.937 } 00:07:29.937 ], 00:07:29.937 "allow_any_host": true, 00:07:29.937 "hosts": [] 00:07:29.937 }, 00:07:29.937 { 00:07:29.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:29.937 "subtype": "NVMe", 00:07:29.937 "listen_addresses": [ 00:07:29.937 { 00:07:29.937 "trtype": "TCP", 00:07:29.937 "adrfam": "IPv4", 00:07:29.937 "traddr": "10.0.0.2", 00:07:29.937 "trsvcid": "4420" 00:07:29.937 } 00:07:29.937 ], 00:07:29.937 "allow_any_host": true, 00:07:29.937 "hosts": [], 00:07:29.937 "serial_number": "SPDK00000000000001", 00:07:29.937 "model_number": "SPDK bdev Controller", 00:07:29.937 "max_namespaces": 32, 00:07:29.937 "min_cntlid": 1, 00:07:29.937 "max_cntlid": 65519, 00:07:29.937 "namespaces": [ 00:07:29.937 { 00:07:29.937 "nsid": 1, 00:07:29.937 "bdev_name": "Null1", 00:07:29.937 "name": "Null1", 00:07:29.937 "nguid": "BD986B426668439F818FB50CD458DC0E", 00:07:29.937 "uuid": "bd986b42-6668-439f-818f-b50cd458dc0e" 00:07:29.937 } 00:07:29.937 ] 00:07:29.937 }, 00:07:29.937 { 00:07:29.937 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:29.937 "subtype": "NVMe", 00:07:29.937 "listen_addresses": [ 00:07:29.937 { 00:07:29.937 "trtype": "TCP", 00:07:29.937 "adrfam": "IPv4", 00:07:29.937 "traddr": "10.0.0.2", 00:07:29.937 "trsvcid": "4420" 00:07:29.937 } 00:07:29.937 ], 00:07:29.937 "allow_any_host": true, 00:07:29.937 "hosts": [], 00:07:29.937 "serial_number": "SPDK00000000000002", 00:07:29.937 "model_number": "SPDK bdev Controller", 00:07:29.937 "max_namespaces": 32, 00:07:29.937 "min_cntlid": 1, 00:07:29.937 "max_cntlid": 65519, 00:07:29.937 "namespaces": [ 00:07:29.937 { 00:07:29.937 "nsid": 1, 00:07:29.937 "bdev_name": "Null2", 00:07:29.937 "name": "Null2", 00:07:29.937 "nguid": "5B01427C48BC41629411B33F141DE4A9", 00:07:29.937 "uuid": "5b01427c-48bc-4162-9411-b33f141de4a9" 00:07:29.937 } 00:07:29.937 ] 00:07:29.937 }, 00:07:29.937 { 00:07:29.937 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:29.937 "subtype": "NVMe", 00:07:29.937 "listen_addresses": [ 00:07:29.937 { 00:07:29.937 "trtype": "TCP", 00:07:29.937 "adrfam": "IPv4", 00:07:29.937 "traddr": "10.0.0.2", 00:07:29.937 "trsvcid": "4420" 00:07:29.937 } 00:07:29.937 ], 00:07:29.937 "allow_any_host": true, 
00:07:29.937 "hosts": [], 00:07:29.937 "serial_number": "SPDK00000000000003", 00:07:29.937 "model_number": "SPDK bdev Controller", 00:07:29.937 "max_namespaces": 32, 00:07:29.937 "min_cntlid": 1, 00:07:29.937 "max_cntlid": 65519, 00:07:29.937 "namespaces": [ 00:07:29.937 { 00:07:29.937 "nsid": 1, 00:07:29.937 "bdev_name": "Null3", 00:07:29.937 "name": "Null3", 00:07:29.937 "nguid": "5E42969CDA6F476EAD5B69A8A90697B9", 00:07:29.937 "uuid": "5e42969c-da6f-476e-ad5b-69a8a90697b9" 00:07:29.937 } 00:07:29.937 ] 00:07:29.937 }, 00:07:29.937 { 00:07:29.937 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:29.937 "subtype": "NVMe", 00:07:29.937 "listen_addresses": [ 00:07:29.937 { 00:07:29.937 "trtype": "TCP", 00:07:29.937 "adrfam": "IPv4", 00:07:29.937 "traddr": "10.0.0.2", 00:07:29.937 "trsvcid": "4420" 00:07:29.937 } 00:07:29.937 ], 00:07:29.937 "allow_any_host": true, 00:07:29.937 "hosts": [], 00:07:29.937 "serial_number": "SPDK00000000000004", 00:07:29.937 "model_number": "SPDK bdev Controller", 00:07:29.937 "max_namespaces": 32, 00:07:29.937 "min_cntlid": 1, 00:07:29.937 "max_cntlid": 65519, 00:07:29.937 "namespaces": [ 00:07:29.937 { 00:07:29.937 "nsid": 1, 00:07:29.937 "bdev_name": "Null4", 00:07:29.937 "name": "Null4", 00:07:29.937 "nguid": "616D6C39A9404CC2BBBE01B6B6BE66B7", 00:07:29.937 "uuid": "616d6c39-a940-4cc2-bbbe-01b6b6be66b7" 00:07:29.937 } 00:07:29.937 ] 00:07:29.937 } 00:07:29.937 ] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.937 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.938 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.938 rmmod nvme_tcp 00:07:30.194 rmmod nvme_fabrics 00:07:30.194 rmmod nvme_keyring 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2928160 ']' 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2928160 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2928160 ']' 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2928160 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2928160 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2928160' 00:07:30.194 killing process with pid 2928160 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2928160 00:07:30.194 11:33:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2928160 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.453 11:33:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.358 11:33:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.358 00:07:32.358 real 0m6.255s 00:07:32.358 user 0m7.399s 00:07:32.358 sys 0m1.964s 00:07:32.358 11:33:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.358 11:33:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.358 ************************************ 00:07:32.358 END TEST nvmf_target_discovery 00:07:32.358 ************************************ 00:07:32.358 11:33:40 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:32.358 11:33:40 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:32.358 11:33:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.358 11:33:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.358 11:33:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.617 ************************************ 00:07:32.617 START TEST nvmf_referrals 00:07:32.617 ************************************ 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:32.617 * Looking for test storage... 00:07:32.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.617 11:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.153 11:33:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:35.153 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:35.153 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.153 11:33:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:35.153 Found net devices under 0000:84:00.0: cvl_0_0 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:35.153 Found net devices under 0000:84:00.1: cvl_0_1 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.153 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.154 11:33:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:07:35.154 00:07:35.154 --- 10.0.0.2 ping statistics --- 00:07:35.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.154 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:07:35.154 00:07:35.154 --- 10.0.0.1 ping statistics --- 00:07:35.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.154 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2930394 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2930394 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2930394 ']' 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.154 11:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.154 [2024-07-15 11:33:42.799942] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:35.154 [2024-07-15 11:33:42.800050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.154 [2024-07-15 11:33:42.866546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.154 [2024-07-15 11:33:42.978719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.154 [2024-07-15 11:33:42.978798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.154 [2024-07-15 11:33:42.978827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.154 [2024-07-15 11:33:42.978838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.154 [2024-07-15 11:33:42.978848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.154 [2024-07-15 11:33:42.978935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.154 [2024-07-15 11:33:42.979000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.154 [2024-07-15 11:33:42.979048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.154 [2024-07-15 11:33:42.979051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.154 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.154 [2024-07-15 11:33:43.136658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.417 [2024-07-15 11:33:43.148894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.417 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.679 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:35.971 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:35.971 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:35.972 11:33:43 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:35.972 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.238 11:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.238 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:36.238 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.238 11:33:44 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:36.238 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:36.238 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:36.238 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.238 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.497 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.756 
11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.756 rmmod nvme_tcp 00:07:36.756 rmmod nvme_fabrics 00:07:36.756 rmmod nvme_keyring 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2930394 ']' 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2930394 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2930394 ']' 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2930394 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2930394 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2930394' 00:07:36.756 killing process with pid 2930394 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2930394 00:07:36.756 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2930394 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.014 11:33:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.918 11:33:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.177 00:07:39.177 real 0m6.548s 00:07:39.177 user 0m8.800s 00:07:39.177 sys 0m2.207s 00:07:39.177 11:33:46 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.177 11:33:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.177 ************************************ 00:07:39.177 END TEST nvmf_referrals 00:07:39.177 ************************************ 00:07:39.177 11:33:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:39.177 11:33:46 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:39.177 11:33:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.177 11:33:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.177 11:33:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.177 ************************************ 00:07:39.177 START TEST nvmf_connect_disconnect 00:07:39.177 ************************************ 00:07:39.177 11:33:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:39.177 * Looking for test storage... 00:07:39.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.177 11:33:47 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.177 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.178 11:33:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.710 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:41.711 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:41.711 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.711 11:33:49 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:41.711 Found net devices under 0000:84:00.0: cvl_0_0 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:41.711 Found net devices under 0000:84:00.1: cvl_0_1 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:07:41.711 00:07:41.711 --- 10.0.0.2 ping statistics --- 00:07:41.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.711 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:41.711 00:07:41.711 --- 10.0.0.1 ping statistics --- 00:07:41.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.711 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2932702 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2932702 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2932702 ']' 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.711 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.711 [2024-07-15 11:33:49.418315] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:41.711 [2024-07-15 11:33:49.418392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.711 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.711 [2024-07-15 11:33:49.482114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.711 [2024-07-15 11:33:49.590528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.711 [2024-07-15 11:33:49.590595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.711 [2024-07-15 11:33:49.590622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.711 [2024-07-15 11:33:49.590634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.711 [2024-07-15 11:33:49.590643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.711 [2024-07-15 11:33:49.590769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.711 [2024-07-15 11:33:49.590865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.711 [2024-07-15 11:33:49.590922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.711 [2024-07-15 11:33:49.590926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.970 [2024-07-15 11:33:49.758825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:41.970 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.971 11:33:49 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.971 [2024-07-15 11:33:49.820132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:41.971 11:33:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:45.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.136 rmmod nvme_tcp 00:07:56.136 rmmod nvme_fabrics 00:07:56.136 rmmod nvme_keyring 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2932702 ']' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2932702 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2932702 ']' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2932702 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2932702 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2932702' 00:07:56.136 killing process with pid 2932702 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2932702 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2932702 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.136 11:34:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.043 11:34:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.043 00:07:58.043 real 0m18.966s 00:07:58.043 user 0m56.513s 00:07:58.043 sys 0m3.488s 00:07:58.043 11:34:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.043 11:34:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.043 ************************************ 00:07:58.043 END TEST nvmf_connect_disconnect 00:07:58.043 ************************************ 00:07:58.043 11:34:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:58.043 11:34:05 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:58.043 11:34:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.043 11:34:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.043 11:34:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.043 ************************************ 00:07:58.043 START TEST nvmf_multitarget 00:07:58.043 ************************************ 00:07:58.043 11:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:58.043 * Looking for test storage... 
00:07:58.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.043 11:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.043 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
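The shell trace above builds up the nvmf_tgt argument list (shared-memory id via -i, the 0xFFFF log mask via -e) before nvmftestinit prepares the network devices; later in this log the target is launched inside the cvl_0_0_ns_spdk namespace with a 0xF core mask. A minimal sketch of that launch, assuming the paths and values shown in the trace, follows; it is illustrative only and not part of the test scripts themselves.

```bash
#!/usr/bin/env bash
# Sketch of the nvmf_tgt launch this trace performs (nvmf/common.sh@29 and @480).
# Paths, shm id, and masks are copied from the log; adjust for other hosts.
SPDK_NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
NVMF_APP_SHM_ID=0
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

NVMF_APP=("$SPDK_NVMF_TGT" -i "$NVMF_APP_SHM_ID" -e 0xFFFF)          # base args (common.sh@29)
NVMF_APP=(ip netns exec "$NVMF_TARGET_NAMESPACE" "${NVMF_APP[@]}")   # run in the test netns (common.sh@270)

"${NVMF_APP[@]}" -m 0xF &    # 4-core mask, as nvmfappstart -m 0xF does in this log
nvmfpid=$!
echo "nvmf_tgt started with pid $nvmfpid"
```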
00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.302 11:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.206 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:00.464 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.464 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:00.465 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:00.465 Found net devices under 0000:84:00.0: cvl_0_0 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
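The preceding trace has matched both detected Intel E810 functions (0000:84:00.0 and 0000:84:00.1, device id 0x159b, ice driver) and is now resolving which kernel net devices sit under each PCI function through sysfs. A small sketch of that lookup, using the same glob the script traces at nvmf/common.sh@383 and @399, is shown here; the PCI address is the one reported in this log and will differ on other machines.

```bash
#!/usr/bin/env bash
# Sketch of the per-NIC sysfs lookup seen in the trace: find the net
# device(s) backing a PCI function and strip the sysfs path prefix.
pci=0000:84:00.0                                    # address reported in this log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. /sys/.../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # same format as the log output
```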
00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:00.465 Found net devices under 0000:84:00.1: cvl_0_1 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:00.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:00.465 00:08:00.465 --- 10.0.0.2 ping statistics --- 00:08:00.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.465 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:08:00.465 00:08:00.465 --- 10.0.0.1 ping statistics --- 00:08:00.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.465 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2936379 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2936379 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2936379 ']' 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.465 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:00.465 [2024-07-15 11:34:08.418524] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:08:00.465 [2024-07-15 11:34:08.418616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.723 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.723 [2024-07-15 11:34:08.488283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.723 [2024-07-15 11:34:08.598292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.723 [2024-07-15 11:34:08.598353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.723 [2024-07-15 11:34:08.598380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.723 [2024-07-15 11:34:08.598392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.723 [2024-07-15 11:34:08.598401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.723 [2024-07-15 11:34:08.600759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.723 [2024-07-15 11:34:08.600829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.723 [2024-07-15 11:34:08.600885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.723 [2024-07-15 11:34:08.600889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:00.980 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:00.980 "nvmf_tgt_1" 00:08:01.237 11:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:01.237 "nvmf_tgt_2" 00:08:01.237 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:01.237 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:01.237 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:01.237 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:01.495 true 00:08:01.495 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:01.495 true 00:08:01.495 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:01.495 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:01.753 rmmod nvme_tcp 00:08:01.753 rmmod nvme_fabrics 00:08:01.753 rmmod nvme_keyring 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2936379 ']' 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2936379 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2936379 ']' 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2936379 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2936379 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2936379' 00:08:01.753 killing process with pid 2936379 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2936379 00:08:01.753 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2936379 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.011 11:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.545 11:34:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.545 00:08:04.545 real 0m5.940s 00:08:04.545 user 0m6.457s 00:08:04.545 sys 0m2.024s 00:08:04.545 11:34:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.545 11:34:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.545 ************************************ 00:08:04.545 END TEST nvmf_multitarget 00:08:04.545 ************************************ 00:08:04.545 11:34:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:04.545 11:34:11 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:04.545 11:34:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.545 11:34:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.545 11:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.545 ************************************ 00:08:04.545 START TEST nvmf_rpc 00:08:04.545 ************************************ 00:08:04.545 11:34:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:04.545 * Looking for test storage... 
00:08:04.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.545 11:34:12 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.546 11:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:06.473 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:06.473 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.473 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:06.474 Found net devices under 0000:84:00.0: cvl_0_0 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:06.474 Found net devices under 0000:84:00.1: cvl_0_1 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:08:06.474 00:08:06.474 --- 10.0.0.2 ping statistics --- 00:08:06.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.474 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:06.474 00:08:06.474 --- 10.0.0.1 ping statistics --- 00:08:06.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.474 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2938616 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2938616 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2938616 ']' 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.474 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.474 [2024-07-15 11:34:14.325462] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:06.474 [2024-07-15 11:34:14.325529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.474 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.474 [2024-07-15 11:34:14.386956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.732 [2024-07-15 11:34:14.491104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.732 [2024-07-15 11:34:14.491155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:06.732 [2024-07-15 11:34:14.491182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.732 [2024-07-15 11:34:14.491194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.732 [2024-07-15 11:34:14.491203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.732 [2024-07-15 11:34:14.491339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.733 [2024-07-15 11:34:14.491405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.733 [2024-07-15 11:34:14.491513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.733 [2024-07-15 11:34:14.491516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:06.733 "tick_rate": 2700000000, 00:08:06.733 "poll_groups": [ 00:08:06.733 { 00:08:06.733 "name": "nvmf_tgt_poll_group_000", 00:08:06.733 "admin_qpairs": 0, 00:08:06.733 "io_qpairs": 0, 00:08:06.733 "current_admin_qpairs": 0, 00:08:06.733 "current_io_qpairs": 0, 00:08:06.733 "pending_bdev_io": 0, 00:08:06.733 "completed_nvme_io": 0, 00:08:06.733 "transports": [] 00:08:06.733 }, 00:08:06.733 { 00:08:06.733 "name": "nvmf_tgt_poll_group_001", 00:08:06.733 "admin_qpairs": 0, 00:08:06.733 "io_qpairs": 0, 00:08:06.733 "current_admin_qpairs": 0, 00:08:06.733 "current_io_qpairs": 0, 00:08:06.733 "pending_bdev_io": 0, 00:08:06.733 "completed_nvme_io": 0, 00:08:06.733 "transports": [] 00:08:06.733 }, 00:08:06.733 { 00:08:06.733 "name": "nvmf_tgt_poll_group_002", 00:08:06.733 "admin_qpairs": 0, 00:08:06.733 "io_qpairs": 0, 00:08:06.733 "current_admin_qpairs": 0, 00:08:06.733 "current_io_qpairs": 0, 00:08:06.733 "pending_bdev_io": 0, 00:08:06.733 "completed_nvme_io": 0, 00:08:06.733 "transports": [] 00:08:06.733 }, 00:08:06.733 { 00:08:06.733 "name": "nvmf_tgt_poll_group_003", 00:08:06.733 "admin_qpairs": 0, 00:08:06.733 "io_qpairs": 0, 00:08:06.733 "current_admin_qpairs": 0, 00:08:06.733 "current_io_qpairs": 0, 00:08:06.733 "pending_bdev_io": 0, 00:08:06.733 "completed_nvme_io": 0, 00:08:06.733 "transports": [] 00:08:06.733 } 00:08:06.733 ] 00:08:06.733 }' 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:06.733 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 [2024-07-15 11:34:14.735841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:06.992 "tick_rate": 2700000000, 00:08:06.992 "poll_groups": [ 00:08:06.992 { 00:08:06.992 "name": "nvmf_tgt_poll_group_000", 00:08:06.992 "admin_qpairs": 0, 00:08:06.992 "io_qpairs": 0, 00:08:06.992 "current_admin_qpairs": 0, 00:08:06.992 "current_io_qpairs": 0, 00:08:06.992 "pending_bdev_io": 0, 00:08:06.992 "completed_nvme_io": 0, 00:08:06.992 "transports": [ 00:08:06.992 { 00:08:06.992 "trtype": "TCP" 00:08:06.992 } 00:08:06.992 ] 00:08:06.992 }, 00:08:06.992 { 00:08:06.992 "name": "nvmf_tgt_poll_group_001", 00:08:06.992 "admin_qpairs": 0, 00:08:06.992 "io_qpairs": 0, 00:08:06.992 "current_admin_qpairs": 0, 00:08:06.992 "current_io_qpairs": 0, 00:08:06.992 "pending_bdev_io": 0, 00:08:06.992 "completed_nvme_io": 0, 00:08:06.992 "transports": [ 00:08:06.992 { 00:08:06.992 "trtype": "TCP" 00:08:06.992 } 00:08:06.992 ] 00:08:06.992 }, 00:08:06.992 { 00:08:06.992 "name": "nvmf_tgt_poll_group_002", 00:08:06.992 "admin_qpairs": 0, 00:08:06.992 "io_qpairs": 0, 00:08:06.992 "current_admin_qpairs": 0, 00:08:06.992 "current_io_qpairs": 0, 00:08:06.992 "pending_bdev_io": 0, 00:08:06.992 "completed_nvme_io": 0, 00:08:06.992 "transports": [ 00:08:06.992 { 00:08:06.992 "trtype": "TCP" 00:08:06.992 } 00:08:06.992 ] 00:08:06.992 }, 00:08:06.992 { 00:08:06.992 "name": "nvmf_tgt_poll_group_003", 00:08:06.992 "admin_qpairs": 0, 00:08:06.992 "io_qpairs": 0, 00:08:06.992 "current_admin_qpairs": 0, 00:08:06.992 "current_io_qpairs": 0, 00:08:06.992 "pending_bdev_io": 0, 00:08:06.992 "completed_nvme_io": 0, 00:08:06.992 "transports": [ 00:08:06.992 { 00:08:06.992 "trtype": "TCP" 00:08:06.992 } 00:08:06.992 ] 00:08:06.992 } 00:08:06.992 ] 00:08:06.992 }' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 Malloc1 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.992 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.993 [2024-07-15 11:34:14.892186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:06.993 [2024-07-15 11:34:14.914654] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:08:06.993 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:06.993 could not add new controller: failed to write to nvme-fabrics device 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.993 11:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:07.928 11:34:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.928 11:34:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:07.928 11:34:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.928 11:34:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:07.928 11:34:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.825 11:34:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.825 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.083 [2024-07-15 11:34:17.825010] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:08:10.083 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:10.083 could not add new controller: failed to write to nvme-fabrics device 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.083 11:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.650 11:34:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.650 11:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.650 11:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.651 11:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:10.651 11:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:12.551 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:12.810 11:34:20 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.810 [2024-07-15 11:34:20.661398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.810 11:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.376 11:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.376 11:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:13.376 11:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.376 11:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:13.376 11:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 [2024-07-15 11:34:23.404981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.912 11:34:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.172 11:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.172 11:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:16.172 11:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.172 11:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:16.172 11:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 [2024-07-15 11:34:26.221078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.709 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.969 11:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.969 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:18.969 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.969 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:18.969 11:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:21.507 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:21.507 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:21.507 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:21.508 11:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.508 [2024-07-15 11:34:29.034950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.508 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.768 11:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.768 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:21.768 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.768 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:21.768 11:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:23.674 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:23.674 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:23.674 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.934 
11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 [2024-07-15 11:34:31.806578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:23.934 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.935 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.935 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.935 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:23.935 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.935 11:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.935 11:34:31 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.935 11:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.873 11:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.873 11:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:24.873 11:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.873 11:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:24.873 11:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:26.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 [2024-07-15 11:34:34.666619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 [2024-07-15 11:34:34.714679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.780 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.780 [2024-07-15 11:34:34.762876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 [2024-07-15 11:34:34.811052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 [2024-07-15 11:34:34.859228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.041 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:27.041 "tick_rate": 2700000000, 00:08:27.041 "poll_groups": [ 00:08:27.041 { 00:08:27.041 "name": "nvmf_tgt_poll_group_000", 00:08:27.041 "admin_qpairs": 2, 00:08:27.041 "io_qpairs": 84, 00:08:27.041 "current_admin_qpairs": 0, 00:08:27.041 "current_io_qpairs": 0, 00:08:27.041 "pending_bdev_io": 0, 00:08:27.041 "completed_nvme_io": 139, 00:08:27.041 "transports": [ 00:08:27.041 { 00:08:27.041 "trtype": "TCP" 00:08:27.041 } 00:08:27.041 ] 00:08:27.041 }, 00:08:27.041 { 00:08:27.041 "name": "nvmf_tgt_poll_group_001", 00:08:27.041 "admin_qpairs": 2, 00:08:27.041 "io_qpairs": 84, 00:08:27.041 "current_admin_qpairs": 0, 00:08:27.041 "current_io_qpairs": 0, 00:08:27.042 "pending_bdev_io": 0, 00:08:27.042 "completed_nvme_io": 140, 00:08:27.042 "transports": [ 00:08:27.042 { 00:08:27.042 "trtype": "TCP" 00:08:27.042 } 00:08:27.042 ] 00:08:27.042 }, 00:08:27.042 { 00:08:27.042 
"name": "nvmf_tgt_poll_group_002", 00:08:27.042 "admin_qpairs": 1, 00:08:27.042 "io_qpairs": 84, 00:08:27.042 "current_admin_qpairs": 0, 00:08:27.042 "current_io_qpairs": 0, 00:08:27.042 "pending_bdev_io": 0, 00:08:27.042 "completed_nvme_io": 180, 00:08:27.042 "transports": [ 00:08:27.042 { 00:08:27.042 "trtype": "TCP" 00:08:27.042 } 00:08:27.042 ] 00:08:27.042 }, 00:08:27.042 { 00:08:27.042 "name": "nvmf_tgt_poll_group_003", 00:08:27.042 "admin_qpairs": 2, 00:08:27.042 "io_qpairs": 84, 00:08:27.042 "current_admin_qpairs": 0, 00:08:27.042 "current_io_qpairs": 0, 00:08:27.042 "pending_bdev_io": 0, 00:08:27.042 "completed_nvme_io": 227, 00:08:27.042 "transports": [ 00:08:27.042 { 00:08:27.042 "trtype": "TCP" 00:08:27.042 } 00:08:27.042 ] 00:08:27.042 } 00:08:27.042 ] 00:08:27.042 }' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.042 11:34:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.042 rmmod nvme_tcp 00:08:27.042 rmmod nvme_fabrics 00:08:27.301 rmmod nvme_keyring 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2938616 ']' 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2938616 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2938616 ']' 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2938616 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2938616 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2938616' 00:08:27.301 killing process with pid 2938616 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2938616 00:08:27.301 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2938616 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.561 11:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.468 11:34:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.468 00:08:29.468 real 0m25.459s 00:08:29.468 user 1m22.723s 00:08:29.468 sys 0m4.175s 00:08:29.468 11:34:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.468 11:34:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.468 ************************************ 00:08:29.468 END TEST nvmf_rpc 00:08:29.468 ************************************ 00:08:29.468 11:34:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.468 11:34:37 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:29.468 11:34:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.468 11:34:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.468 11:34:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.727 ************************************ 00:08:29.727 START TEST nvmf_invalid 00:08:29.727 ************************************ 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:29.727 * Looking for test storage... 
00:08:29.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:29.727 11:34:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.728 11:34:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:32.262 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:32.262 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.262 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:32.263 Found net devices under 0000:84:00.0: cvl_0_0 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:32.263 Found net devices under 0000:84:00.1: cvl_0_1 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:32.263 00:08:32.263 --- 10.0.0.2 ping statistics --- 00:08:32.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.263 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:32.263 00:08:32.263 --- 10.0.0.1 ping statistics --- 00:08:32.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.263 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2943132 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2943132 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2943132 ']' 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.263 11:34:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 [2024-07-15 11:34:39.911103] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:08:32.263 [2024-07-15 11:34:39.911193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.263 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.263 [2024-07-15 11:34:39.975955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.263 [2024-07-15 11:34:40.083834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.263 [2024-07-15 11:34:40.083894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.263 [2024-07-15 11:34:40.083919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.263 [2024-07-15 11:34:40.083930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.263 [2024-07-15 11:34:40.083939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.263 [2024-07-15 11:34:40.084004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.263 [2024-07-15 11:34:40.084100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.263 [2024-07-15 11:34:40.084165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.263 [2024-07-15 11:34:40.084168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.263 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:32.521 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17214 00:08:32.779 [2024-07-15 11:34:40.526507] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:32.779 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:32.779 { 00:08:32.779 "nqn": "nqn.2016-06.io.spdk:cnode17214", 00:08:32.779 "tgt_name": "foobar", 00:08:32.779 "method": "nvmf_create_subsystem", 00:08:32.779 "req_id": 1 00:08:32.779 } 00:08:32.779 Got JSON-RPC error response 00:08:32.779 response: 00:08:32.779 { 00:08:32.779 "code": -32603, 00:08:32.779 "message": "Unable to find target foobar" 00:08:32.779 }' 00:08:32.779 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:32.779 { 00:08:32.779 "nqn": "nqn.2016-06.io.spdk:cnode17214", 00:08:32.779 "tgt_name": "foobar", 00:08:32.779 "method": "nvmf_create_subsystem", 00:08:32.779 "req_id": 1 00:08:32.779 } 00:08:32.779 Got JSON-RPC error response 00:08:32.779 response: 00:08:32.779 { 00:08:32.779 "code": -32603, 00:08:32.780 "message": "Unable to find target foobar" 
00:08:32.780 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:32.780 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:32.780 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3020 00:08:33.065 [2024-07-15 11:34:40.787439] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3020: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:33.065 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:33.065 { 00:08:33.065 "nqn": "nqn.2016-06.io.spdk:cnode3020", 00:08:33.065 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:33.065 "method": "nvmf_create_subsystem", 00:08:33.065 "req_id": 1 00:08:33.065 } 00:08:33.065 Got JSON-RPC error response 00:08:33.065 response: 00:08:33.065 { 00:08:33.065 "code": -32602, 00:08:33.065 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:33.065 }' 00:08:33.065 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:33.065 { 00:08:33.065 "nqn": "nqn.2016-06.io.spdk:cnode3020", 00:08:33.065 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:33.065 "method": "nvmf_create_subsystem", 00:08:33.065 "req_id": 1 00:08:33.065 } 00:08:33.065 Got JSON-RPC error response 00:08:33.065 response: 00:08:33.065 { 00:08:33.065 "code": -32602, 00:08:33.065 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:33.065 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:33.065 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:33.065 11:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18775 00:08:33.367 [2024-07-15 11:34:41.036219] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18775: invalid model number 'SPDK_Controller' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:33.367 { 00:08:33.367 "nqn": "nqn.2016-06.io.spdk:cnode18775", 00:08:33.367 "model_number": "SPDK_Controller\u001f", 00:08:33.367 "method": "nvmf_create_subsystem", 00:08:33.367 "req_id": 1 00:08:33.367 } 00:08:33.367 Got JSON-RPC error response 00:08:33.367 response: 00:08:33.367 { 00:08:33.367 "code": -32602, 00:08:33.367 "message": "Invalid MN SPDK_Controller\u001f" 00:08:33.367 }' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:33.367 { 00:08:33.367 "nqn": "nqn.2016-06.io.spdk:cnode18775", 00:08:33.367 "model_number": "SPDK_Controller\u001f", 00:08:33.367 "method": "nvmf_create_subsystem", 00:08:33.367 "req_id": 1 00:08:33.367 } 00:08:33.367 Got JSON-RPC error response 00:08:33.367 response: 00:08:33.367 { 00:08:33.367 "code": -32602, 00:08:33.367 "message": "Invalid MN SPDK_Controller\u001f" 00:08:33.367 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 
11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 
11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.367 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ujk%fLD~bYXjs1:lrxl)7' 00:08:33.368 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ujk%fLD~bYXjs1:lrxl)7' nqn.2016-06.io.spdk:cnode14077 00:08:33.627 [2024-07-15 11:34:41.349300] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14077: invalid serial number 'ujk%fLD~bYXjs1:lrxl)7' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:33.627 { 00:08:33.627 "nqn": "nqn.2016-06.io.spdk:cnode14077", 00:08:33.627 "serial_number": "ujk%fLD~bYXjs1:lrxl)7", 00:08:33.627 "method": "nvmf_create_subsystem", 00:08:33.627 "req_id": 1 00:08:33.627 } 00:08:33.627 Got JSON-RPC error response 00:08:33.627 response: 00:08:33.627 { 
00:08:33.627 "code": -32602, 00:08:33.627 "message": "Invalid SN ujk%fLD~bYXjs1:lrxl)7" 00:08:33.627 }' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:33.627 { 00:08:33.627 "nqn": "nqn.2016-06.io.spdk:cnode14077", 00:08:33.627 "serial_number": "ujk%fLD~bYXjs1:lrxl)7", 00:08:33.627 "method": "nvmf_create_subsystem", 00:08:33.627 "req_id": 1 00:08:33.627 } 00:08:33.627 Got JSON-RPC error response 00:08:33.627 response: 00:08:33.627 { 00:08:33.627 "code": -32602, 00:08:33.627 "message": "Invalid SN ujk%fLD~bYXjs1:lrxl)7" 00:08:33.627 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.627 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ . == \- ]] 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '.m1C[ xdj1J,c{Wx--f"{O+v.-A`.ge#T*~%-oxF,' 00:08:33.628 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '.m1C[ xdj1J,c{Wx--f"{O+v.-A`.ge#T*~%-oxF,' nqn.2016-06.io.spdk:cnode30102 00:08:33.886 [2024-07-15 11:34:41.758607] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30102: invalid model number '.m1C[ xdj1J,c{Wx--f"{O+v.-A`.ge#T*~%-oxF,' 00:08:33.886 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:33.886 { 00:08:33.886 "nqn": "nqn.2016-06.io.spdk:cnode30102", 00:08:33.886 "model_number": ".m1C[ xdj1J,c{Wx--f\"{O+v.-A`.ge#T*~%-oxF,", 00:08:33.886 "method": "nvmf_create_subsystem", 00:08:33.886 "req_id": 1 00:08:33.886 } 00:08:33.886 Got JSON-RPC error response 00:08:33.886 response: 00:08:33.886 { 00:08:33.886 "code": -32602, 00:08:33.886 "message": "Invalid MN .m1C[ xdj1J,c{Wx--f\"{O+v.-A`.ge#T*~%-oxF," 00:08:33.886 }' 00:08:33.886 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:33.886 { 00:08:33.886 "nqn": "nqn.2016-06.io.spdk:cnode30102", 00:08:33.886 "model_number": ".m1C[ xdj1J,c{Wx--f\"{O+v.-A`.ge#T*~%-oxF,", 00:08:33.886 "method": "nvmf_create_subsystem", 00:08:33.886 "req_id": 1 00:08:33.886 } 00:08:33.886 Got JSON-RPC error response 00:08:33.886 response: 00:08:33.886 { 00:08:33.886 "code": -32602, 00:08:33.886 "message": "Invalid MN .m1C[ xdj1J,c{Wx--f\"{O+v.-A`.ge#T*~%-oxF," 00:08:33.886 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:33.886 11:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:34.143 [2024-07-15 11:34:42.011587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.143 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:34.401 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:34.401 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:34.401 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:34.401 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:34.401 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:34.658 [2024-07-15 11:34:42.517216] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:34.658 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:34.658 { 00:08:34.658 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:34.658 "listen_address": { 00:08:34.658 "trtype": "tcp", 00:08:34.658 "traddr": "", 00:08:34.658 "trsvcid": "4421" 00:08:34.658 }, 00:08:34.658 "method": "nvmf_subsystem_remove_listener", 00:08:34.658 "req_id": 1 00:08:34.658 } 00:08:34.658 Got JSON-RPC error response 00:08:34.658 response: 00:08:34.658 { 00:08:34.658 "code": -32602, 00:08:34.658 "message": "Invalid parameters" 00:08:34.658 }' 00:08:34.658 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:34.658 { 00:08:34.658 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:08:34.658 "listen_address": { 00:08:34.658 "trtype": "tcp", 00:08:34.658 "traddr": "", 00:08:34.658 "trsvcid": "4421" 00:08:34.658 }, 00:08:34.658 "method": "nvmf_subsystem_remove_listener", 00:08:34.658 "req_id": 1 00:08:34.658 } 00:08:34.658 Got JSON-RPC error response 00:08:34.658 response: 00:08:34.658 { 00:08:34.658 "code": -32602, 00:08:34.658 "message": "Invalid parameters" 00:08:34.658 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:34.658 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13643 -i 0 00:08:34.916 [2024-07-15 11:34:42.757950] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13643: invalid cntlid range [0-65519] 00:08:34.916 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:34.916 { 00:08:34.916 "nqn": "nqn.2016-06.io.spdk:cnode13643", 00:08:34.916 "min_cntlid": 0, 00:08:34.916 "method": "nvmf_create_subsystem", 00:08:34.916 "req_id": 1 00:08:34.916 } 00:08:34.916 Got JSON-RPC error response 00:08:34.916 response: 00:08:34.916 { 00:08:34.916 "code": -32602, 00:08:34.916 "message": "Invalid cntlid range [0-65519]" 00:08:34.916 }' 00:08:34.916 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:34.916 { 00:08:34.916 "nqn": "nqn.2016-06.io.spdk:cnode13643", 00:08:34.916 "min_cntlid": 0, 00:08:34.916 "method": "nvmf_create_subsystem", 00:08:34.916 "req_id": 1 00:08:34.916 } 00:08:34.916 Got JSON-RPC error response 00:08:34.916 response: 00:08:34.916 { 00:08:34.916 "code": -32602, 00:08:34.916 "message": "Invalid cntlid range [0-65519]" 00:08:34.916 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:34.916 11:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22560 -i 65520 00:08:35.174 [2024-07-15 11:34:43.010785] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22560: invalid cntlid range [65520-65519] 00:08:35.174 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:35.174 { 00:08:35.174 "nqn": "nqn.2016-06.io.spdk:cnode22560", 00:08:35.174 "min_cntlid": 65520, 00:08:35.174 "method": "nvmf_create_subsystem", 00:08:35.174 "req_id": 1 00:08:35.174 } 00:08:35.174 Got JSON-RPC error response 00:08:35.174 response: 00:08:35.174 { 00:08:35.174 "code": -32602, 00:08:35.174 "message": "Invalid cntlid range [65520-65519]" 00:08:35.174 }' 00:08:35.174 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:35.174 { 00:08:35.174 "nqn": "nqn.2016-06.io.spdk:cnode22560", 00:08:35.174 "min_cntlid": 65520, 00:08:35.174 "method": "nvmf_create_subsystem", 00:08:35.174 "req_id": 1 00:08:35.174 } 00:08:35.174 Got JSON-RPC error response 00:08:35.174 response: 00:08:35.174 { 00:08:35.174 "code": -32602, 00:08:35.174 "message": "Invalid cntlid range [65520-65519]" 00:08:35.174 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.174 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31603 -I 0 00:08:35.431 [2024-07-15 11:34:43.255607] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31603: invalid cntlid range [1-0] 00:08:35.431 11:34:43 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@77 -- # out='request: 00:08:35.431 { 00:08:35.431 "nqn": "nqn.2016-06.io.spdk:cnode31603", 00:08:35.431 "max_cntlid": 0, 00:08:35.431 "method": "nvmf_create_subsystem", 00:08:35.431 "req_id": 1 00:08:35.431 } 00:08:35.431 Got JSON-RPC error response 00:08:35.431 response: 00:08:35.431 { 00:08:35.431 "code": -32602, 00:08:35.431 "message": "Invalid cntlid range [1-0]" 00:08:35.431 }' 00:08:35.431 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:35.431 { 00:08:35.431 "nqn": "nqn.2016-06.io.spdk:cnode31603", 00:08:35.431 "max_cntlid": 0, 00:08:35.431 "method": "nvmf_create_subsystem", 00:08:35.431 "req_id": 1 00:08:35.431 } 00:08:35.431 Got JSON-RPC error response 00:08:35.431 response: 00:08:35.431 { 00:08:35.431 "code": -32602, 00:08:35.431 "message": "Invalid cntlid range [1-0]" 00:08:35.431 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.431 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23113 -I 65520 00:08:35.688 [2024-07-15 11:34:43.504454] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23113: invalid cntlid range [1-65520] 00:08:35.688 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:35.688 { 00:08:35.688 "nqn": "nqn.2016-06.io.spdk:cnode23113", 00:08:35.688 "max_cntlid": 65520, 00:08:35.688 "method": "nvmf_create_subsystem", 00:08:35.688 "req_id": 1 00:08:35.688 } 00:08:35.688 Got JSON-RPC error response 00:08:35.688 response: 00:08:35.688 { 00:08:35.688 "code": -32602, 00:08:35.688 "message": "Invalid cntlid range [1-65520]" 00:08:35.688 }' 00:08:35.688 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:35.688 { 00:08:35.688 "nqn": "nqn.2016-06.io.spdk:cnode23113", 00:08:35.688 "max_cntlid": 65520, 00:08:35.688 "method": "nvmf_create_subsystem", 00:08:35.688 "req_id": 1 00:08:35.688 } 00:08:35.688 Got JSON-RPC error response 00:08:35.688 response: 00:08:35.688 { 00:08:35.688 "code": -32602, 00:08:35.688 "message": "Invalid cntlid range [1-65520]" 00:08:35.688 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.688 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12692 -i 6 -I 5 00:08:35.945 [2024-07-15 11:34:43.749291] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12692: invalid cntlid range [6-5] 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:35.945 { 00:08:35.945 "nqn": "nqn.2016-06.io.spdk:cnode12692", 00:08:35.945 "min_cntlid": 6, 00:08:35.945 "max_cntlid": 5, 00:08:35.945 "method": "nvmf_create_subsystem", 00:08:35.945 "req_id": 1 00:08:35.945 } 00:08:35.945 Got JSON-RPC error response 00:08:35.945 response: 00:08:35.945 { 00:08:35.945 "code": -32602, 00:08:35.945 "message": "Invalid cntlid range [6-5]" 00:08:35.945 }' 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:35.945 { 00:08:35.945 "nqn": "nqn.2016-06.io.spdk:cnode12692", 00:08:35.945 "min_cntlid": 6, 00:08:35.945 "max_cntlid": 5, 00:08:35.945 "method": "nvmf_create_subsystem", 00:08:35.945 "req_id": 1 00:08:35.945 } 00:08:35.945 Got JSON-RPC error response 00:08:35.945 response: 00:08:35.945 { 00:08:35.945 "code": -32602, 00:08:35.945 "message": "Invalid cntlid range [6-5]" 
00:08:35.945 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:35.945 { 00:08:35.945 "name": "foobar", 00:08:35.945 "method": "nvmf_delete_target", 00:08:35.945 "req_id": 1 00:08:35.945 } 00:08:35.945 Got JSON-RPC error response 00:08:35.945 response: 00:08:35.945 { 00:08:35.945 "code": -32602, 00:08:35.945 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:35.945 }' 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:35.945 { 00:08:35.945 "name": "foobar", 00:08:35.945 "method": "nvmf_delete_target", 00:08:35.945 "req_id": 1 00:08:35.945 } 00:08:35.945 Got JSON-RPC error response 00:08:35.945 response: 00:08:35.945 { 00:08:35.945 "code": -32602, 00:08:35.945 "message": "The specified target doesn't exist, cannot delete it." 00:08:35.945 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.945 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.945 rmmod nvme_tcp 00:08:35.945 rmmod nvme_fabrics 00:08:35.945 rmmod nvme_keyring 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2943132 ']' 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2943132 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2943132 ']' 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2943132 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2943132 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2943132' 00:08:36.204 killing process with pid 2943132 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2943132 00:08:36.204 11:34:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2943132 00:08:36.464 11:34:44 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.464 11:34:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.464 11:34:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.464 11:34:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.464 11:34:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.464 11:34:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.464 11:34:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.465 11:34:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.369 11:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.369 00:08:38.369 real 0m8.801s 00:08:38.369 user 0m20.134s 00:08:38.369 sys 0m2.534s 00:08:38.369 11:34:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.369 11:34:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:38.369 ************************************ 00:08:38.369 END TEST nvmf_invalid 00:08:38.369 ************************************ 00:08:38.369 11:34:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:38.369 11:34:46 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:38.369 11:34:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.369 11:34:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.369 11:34:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.369 ************************************ 00:08:38.369 START TEST nvmf_abort 00:08:38.369 ************************************ 00:08:38.369 11:34:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:38.627 * Looking for test storage... 
00:08:38.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.627 11:34:46 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.628 11:34:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:40.529 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.529 11:34:48 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:40.529 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.529 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:40.788 Found net devices under 0000:84:00.0: cvl_0_0 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:40.788 Found net devices under 0000:84:00.1: cvl_0_1 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:40.788 00:08:40.788 --- 10.0.0.2 ping statistics --- 00:08:40.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.788 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:08:40.788 00:08:40.788 --- 10.0.0.1 ping statistics --- 00:08:40.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.788 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2945785 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2945785 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2945785 ']' 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.788 11:34:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:40.788 [2024-07-15 11:34:48.740024] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:40.788 [2024-07-15 11:34:48.740121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.788 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.047 [2024-07-15 11:34:48.803667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:41.047 [2024-07-15 11:34:48.905075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.047 [2024-07-15 11:34:48.905135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:41.047 [2024-07-15 11:34:48.905158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.047 [2024-07-15 11:34:48.905168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.047 [2024-07-15 11:34:48.905178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.047 [2024-07-15 11:34:48.905264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.047 [2024-07-15 11:34:48.905328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.047 [2024-07-15 11:34:48.905331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.047 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.047 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:41.047 11:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.047 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.047 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 [2024-07-15 11:34:49.048732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 Malloc0 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 Delay0 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 [2024-07-15 11:34:49.114619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.305 11:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:41.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.305 [2024-07-15 11:34:49.219634] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:43.855 Initializing NVMe Controllers 00:08:43.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:43.855 controller IO queue size 128 less than required 00:08:43.855 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:43.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:43.855 Initialization complete. Launching workers. 
00:08:43.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33687 00:08:43.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33748, failed to submit 62 00:08:43.855 success 33691, unsuccess 57, failed 0 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.855 rmmod nvme_tcp 00:08:43.855 rmmod nvme_fabrics 00:08:43.855 rmmod nvme_keyring 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2945785 ']' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2945785 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2945785 ']' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2945785 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2945785 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2945785' 00:08:43.855 killing process with pid 2945785 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2945785 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2945785 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.855 11:34:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.762 11:34:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.762 00:08:45.762 real 0m7.354s 00:08:45.762 user 0m10.253s 00:08:45.762 sys 0m2.701s 00:08:45.762 11:34:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.762 11:34:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.762 ************************************ 00:08:45.762 END TEST nvmf_abort 00:08:45.762 ************************************ 00:08:45.762 11:34:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:45.762 11:34:53 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:45.762 11:34:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.762 11:34:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.762 11:34:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.762 ************************************ 00:08:45.762 START TEST nvmf_ns_hotplug_stress 00:08:45.762 ************************************ 00:08:45.762 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:46.021 * Looking for test storage... 00:08:46.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.021 11:34:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.021 11:34:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.021 11:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.552 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:48.553 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:48.553 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.553 11:34:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:48.553 Found net devices under 0000:84:00.0: cvl_0_0 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:48.553 Found net devices under 0000:84:00.1: cvl_0_1 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.553 11:34:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.553 11:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:08:48.553 00:08:48.553 --- 10.0.0.2 ping statistics --- 00:08:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.553 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:48.553 00:08:48.553 --- 10.0.0.1 ping statistics --- 00:08:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.553 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2948027 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2948027 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2948027 ']' 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.553 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.553 [2024-07-15 11:34:56.157374] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:08:48.553 [2024-07-15 11:34:56.157458] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.553 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.553 [2024-07-15 11:34:56.220238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.553 [2024-07-15 11:34:56.323918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.553 [2024-07-15 11:34:56.323975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.553 [2024-07-15 11:34:56.324003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.553 [2024-07-15 11:34:56.324014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.554 [2024-07-15 11:34:56.324024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.554 [2024-07-15 11:34:56.324107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.554 [2024-07-15 11:34:56.324175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.554 [2024-07-15 11:34:56.324177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:48.554 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:48.811 [2024-07-15 11:34:56.739818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.811 11:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.069 11:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.327 [2024-07-15 11:34:57.250624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.327 11:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.584 11:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:49.842 Malloc0 00:08:49.842 11:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:50.098 Delay0 00:08:50.098 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.355 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:50.612 NULL1 00:08:50.613 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:50.870 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2948442 00:08:50.870 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:50.870 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:50.870 11:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.870 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.244 Read completed with error (sct=0, sc=11) 00:08:52.244 11:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.502 11:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:52.502 11:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:52.759 true 00:08:52.759 11:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:52.759 11:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.323 11:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.579 11:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:53.579 11:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:53.836 true 00:08:53.836 11:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:53.836 11:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.093 11:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.350 11:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:54.350 11:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:54.607 true 00:08:54.607 11:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:54.607 11:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.863 11:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.120 11:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:55.121 11:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:55.378 true 00:08:55.378 11:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:55.378 11:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.750 11:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.750 11:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:56.750 11:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:57.014 true 00:08:57.014 11:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:57.014 11:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.316 11:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.580 11:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:57.580 11:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:57.580 true 00:08:57.839 11:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:57.839 11:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.406 11:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.663 11:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:58.663 11:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:58.920 true 00:08:58.920 11:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:58.920 11:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.196 11:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.453 11:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:59.453 11:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:59.710 true 00:08:59.710 11:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:08:59.710 11:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.643 11:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.901 11:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:00.901 11:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:01.158 true 00:09:01.158 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:01.158 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.415 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.673 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:01.673 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:01.930 true 00:09:01.930 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:01.930 11:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.187 11:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.446 11:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:02.446 11:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:02.703 true 00:09:02.703 11:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:02.703 11:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.080 11:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.080 11:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:04.080 11:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:04.337 true 00:09:04.337 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:04.337 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.595 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.852 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:04.852 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:05.110 true 00:09:05.110 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:05.110 11:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.047 11:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.047 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:09:06.047 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:06.305 true 00:09:06.563 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:06.563 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.822 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.822 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:06.822 11:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:07.081 true 00:09:07.081 11:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:07.081 11:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.014 11:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.271 11:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:08.271 11:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:08.529 true 00:09:08.529 11:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:08.529 11:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.787 11:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.044 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:09.044 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:09.301 true 00:09:09.301 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:09.301 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.557 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.815 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:09.815 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1018 00:09:10.072 true 00:09:10.072 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:10.072 11:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.448 11:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.448 11:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:11.448 11:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:11.705 true 00:09:11.705 11:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:11.705 11:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.963 11:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.221 11:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:12.221 11:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:12.478 true 00:09:12.478 11:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:12.478 11:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.411 11:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.668 11:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:13.668 11:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:13.926 true 00:09:13.926 11:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:13.926 11:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.184 11:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.442 11:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:14.442 11:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:14.699 true 00:09:14.699 11:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:14.699 11:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.638 11:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.896 11:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:15.896 11:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:16.153 true 00:09:16.153 11:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:16.153 11:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.411 11:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.669 11:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:16.669 11:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:16.926 true 00:09:16.926 11:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:16.926 11:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.872 11:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.872 11:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:17.872 11:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:18.129 true 00:09:18.129 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:18.129 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.386 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.690 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:18.690 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:18.976 true 00:09:18.976 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:18.976 11:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.911 11:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.167 11:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:20.167 11:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:20.425 true 00:09:20.425 11:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:20.425 11:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.682 11:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.939 11:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:20.939 11:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:21.197 true 00:09:21.197 11:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:21.197 11:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.131 Initializing NVMe Controllers 00:09:22.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:22.131 Controller IO queue size 128, less than required. 00:09:22.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:22.131 Controller IO queue size 128, less than required. 00:09:22.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:22.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:22.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:22.131 Initialization complete. Launching workers. 
00:09:22.131 ======================================================== 00:09:22.131 Latency(us) 00:09:22.131 Device Information : IOPS MiB/s Average min max 00:09:22.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 833.73 0.41 79882.52 2792.62 1012112.13 00:09:22.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10718.68 5.23 11943.75 3387.70 365327.69 00:09:22.131 ======================================================== 00:09:22.131 Total : 11552.40 5.64 16846.84 2792.62 1012112.13 00:09:22.131 00:09:22.131 11:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.389 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:22.389 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:22.647 true 00:09:22.647 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2948442 00:09:22.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2948442) - No such process 00:09:22.647 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2948442 00:09:22.647 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.905 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.163 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:23.163 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:23.163 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:23.163 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.163 11:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:23.421 null0 00:09:23.421 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.421 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.421 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:23.679 null1 00:09:23.679 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.679 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.679 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:23.937 null2 00:09:23.937 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.937 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:09:23.937 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:24.195 null3 00:09:24.195 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.195 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.195 11:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:24.195 null4 00:09:24.195 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.195 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.195 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:24.452 null5 00:09:24.452 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.452 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.452 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:24.710 null6 00:09:24.710 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.710 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.710 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:24.967 null7 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
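[editor's sketch] By this point the background I/O process has exited ("kill: (2948442) - No such process" above), the remaining namespaces have been removed, and the test enters its parallel phase: nthreads=8, and eight null bdevs null0..null7 are created, one per worker, via bdev_null_create with the arguments "100 4096" seen in the log. A sketch of that setup, assuming the same rpc_py path as before:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()

    # One small null bdev per worker: null0 .. null7, size 100, block size 4096 (arguments as logged).
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done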
00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.967 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
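[editor's sketch] Each worker then runs the add_remove helper visible at ns_hotplug_stress.sh@14-@18: with a fixed namespace ID and bdev it adds the namespace to nqn.2016-06.io.spdk:cnode1 and removes it again, ten times, while the parent launches all eight workers in the background and waits on their PIDs (the "wait 2952511 2952512 ..." entry just below). A sketch reconstructed from the logged calls; the loop count of 10 and the pairing of nsid i+1 with null$i follow the log, everything else is an assumption:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    add_remove() {
        local nsid=$1 bdev=$2
        # Repeatedly attach and detach the same namespace while the other workers do the same.
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7, as in the log
        pids+=($!)
    done
    wait "${pids[@]}"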
00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2952511 2952512 2952513 2952516 2952518 2952520 2952522 2952524 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.968 11:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:25.226 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.489 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.746 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.004 11:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.262 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.520 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.777 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.034 11:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.291 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.291 
11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.548 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.549 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.804 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.804 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:27.805 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:28.062 11:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.319 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:28.580 
11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:28.580 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.837 11:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.094 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.353 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:29.611 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:29.611 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.611 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.611 11:35:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:29.611 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:29.869 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:29.869 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:29.869 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:30.126 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.127 11:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:30.385 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.643 rmmod nvme_tcp 00:09:30.643 rmmod nvme_fabrics 00:09:30.643 rmmod nvme_keyring 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2948027 ']' 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2948027 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2948027 ']' 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2948027 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2948027 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2948027' 00:09:30.643 killing process with pid 2948027 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2948027 00:09:30.643 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2948027 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.901 11:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.437 11:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.437 00:09:33.437 real 0m47.140s 00:09:33.437 user 3m33.766s 00:09:33.437 sys 0m17.180s 00:09:33.437 11:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.437 11:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.437 ************************************ 00:09:33.437 END TEST nvmf_ns_hotplug_stress 00:09:33.437 ************************************ 00:09:33.437 11:35:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.437 11:35:40 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:33.437 11:35:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.437 11:35:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.437 11:35:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.437 ************************************ 00:09:33.437 START TEST nvmf_connect_stress 00:09:33.437 ************************************ 00:09:33.437 11:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:33.437 * Looking for test storage... 
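[editor's note] Before the connect_stress output continues below, a recap of what the hotplug trace above was exercising. The @16-@18 markers point at a three-line loop in test/nvmf/target/ns_hotplug_stress.sh; the sketch here is reconstructed from the trace rather than copied from the script, and the worker shape (eight backgrounded add/remove loops, one per namespace, with the illustrative helper name add_remove) is an assumption inferred from the interleaved ordering of the RPC calls:

  # Reconstructed sketch, not the verbatim script. Assumes one backgrounded worker
  # per namespace; the helper name add_remove is illustrative.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do                                  # @16: ten rounds per worker
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17: attach the namespace
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18: detach it again
      done
  }

  for n in $(seq 1 8); do
      add_remove "$n" "null$((n - 1))" &   # nsid 1..8 backed by null0..null7, as in the trace
  done
  wait

Running the eight workers concurrently is what produces the shuffled ordering of the add_ns/remove_ns calls in the log above.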
00:09:33.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.437 11:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.437 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:33.437 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.438 11:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:35.342 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:35.342 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:35.342 Found net devices under 0000:84:00.0: cvl_0_0 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.342 11:35:42 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:35.342 Found net devices under 0000:84:00.1: cvl_0_1 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.342 11:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:09:35.342 00:09:35.342 --- 10.0.0.2 ping statistics --- 00:09:35.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.342 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:35.342 00:09:35.342 --- 10.0.0.1 ping statistics --- 00:09:35.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.342 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.342 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2955287 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2955287 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2955287 ']' 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.343 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.343 [2024-07-15 11:35:43.155036] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
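[editor's note] The target/initiator plumbing that made these pings work is scattered through the nvmf_tcp_init trace above; condensed into one place, every command below appears verbatim in the log, only the comments are added:

  # Condensed from the nvmf_tcp_init trace above; interface names and addresses
  # are the ones the log reports for the two E810 ports (cvl_0_0 / cvl_0_1).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                               # initiator -> target (0.146 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator (0.110 ms above)

nvmf_tgt is then started inside cvl_0_0_ns_spdk (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE" line above), so the connect_stress initiator reaches it over the physical NIC pair rather than loopback.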
00:09:35.343 [2024-07-15 11:35:43.155102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.343 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.343 [2024-07-15 11:35:43.216511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.599 [2024-07-15 11:35:43.330118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.599 [2024-07-15 11:35:43.330182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.599 [2024-07-15 11:35:43.330196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.599 [2024-07-15 11:35:43.330227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.599 [2024-07-15 11:35:43.330237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.599 [2024-07-15 11:35:43.330325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.599 [2024-07-15 11:35:43.330391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.599 [2024-07-15 11:35:43.330395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.599 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.600 [2024-07-15 11:35:43.463466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.600 [2024-07-15 11:35:43.488889] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.600 NULL1 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2955427 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
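[editor's note] At this point the target side of the connect_stress test is fully configured. Pulled out of the trace for readability (rpc_cmd in the script is a thin wrapper around the target's RPC socket, shown here as functionally equivalent rpc.py calls; the flag annotations are the usual meanings, not something the log states):

  # Condensed from the trace above; parameters are exactly those reported in the log.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc_py" bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks

  # connect_stress (PERF_PID 2955427 in the log) is then launched against the listener
  # on core 0 (-c 0x1) with -t 10, and the shell keeps polling it while it runs:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

The seq 1 20 / cat loop in the trace is assembling test/nvmf/target/rpc.txt (its contents are not echoed in the log), and the repeated "kill -0 2955427" / rpc_cmd pairs that fill the rest of this section check that the stress process is still alive while the script keeps driving RPCs at the target.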
00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.600 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.162 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.162 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:36.162 11:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.162 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.162 11:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.418 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.418 11:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:36.418 11:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.418 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.418 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.674 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.674 11:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 
00:09:36.674 11:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.674 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.674 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.931 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.931 11:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:36.931 11:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.931 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.931 11:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.188 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.188 11:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:37.188 11:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.188 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.188 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.752 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.752 11:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:37.752 11:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.752 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.752 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.010 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.010 11:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:38.010 11:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.010 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.010 11:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.267 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.267 11:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:38.267 11:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.267 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.267 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.525 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.525 11:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:38.525 11:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.525 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.525 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.783 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.783 11:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:38.783 11:35:46 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.783 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.783 11:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.347 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.347 11:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:39.347 11:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.347 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.347 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.605 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.605 11:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:39.605 11:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.605 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.605 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.866 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.866 11:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:39.866 11:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.866 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.866 11:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.142 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.142 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:40.142 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.142 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.142 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.416 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.416 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:40.416 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.416 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.416 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.981 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.981 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:40.981 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.981 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.981 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.237 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.237 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:41.237 11:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.237 
11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.237 11:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.494 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.494 11:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:41.494 11:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.494 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.494 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.750 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.751 11:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:41.751 11:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.751 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.751 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.008 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.008 11:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:42.008 11:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.008 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.008 11:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.572 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.572 11:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:42.572 11:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.572 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.572 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.829 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.829 11:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:42.829 11:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.829 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.829 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.087 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.087 11:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:43.087 11:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.087 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.087 11:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.344 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.344 11:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:43.344 11:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.344 11:35:51 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.344 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.601 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.601 11:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:43.601 11:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.601 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.601 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.166 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.166 11:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:44.166 11:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.166 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.166 11:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.423 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.423 11:35:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:44.423 11:35:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.423 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.423 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.681 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.681 11:35:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:44.681 11:35:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.681 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.681 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.938 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.938 11:35:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:44.938 11:35:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.938 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.938 11:35:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.196 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.196 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:45.196 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.196 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.196 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.759 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.759 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:45.759 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.759 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
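When the background process finally exits, the kill -0 probe fails ("No such process" a few lines below), the loop ends, and the script tears the target down through nvmftestfini. Condensed from the trace that follows, the teardown amounts to roughly this sketch (the helper itself is defined in spdk/test/nvmf/common.sh):

  # Hedged sketch of the nvmftestfini teardown traced below.
  modprobe -v -r nvme-tcp       # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill 2955287                  # the nvmf_tgt app started for this test
  wait 2955287
  ip -4 addr flush cvl_0_1      # drop the initiator-side test address
  # the cvl_0_0_ns_spdk namespace itself is cleaned up by _remove_spdk_ns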
00:09:45.759 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.759 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2955427 00:09:46.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2955427) - No such process 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2955427 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.017 rmmod nvme_tcp 00:09:46.017 rmmod nvme_fabrics 00:09:46.017 rmmod nvme_keyring 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2955287 ']' 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2955287 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2955287 ']' 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2955287 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2955287 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2955287' 00:09:46.017 killing process with pid 2955287 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2955287 00:09:46.017 11:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2955287 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.275 11:35:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.812 11:35:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.812 00:09:48.812 real 0m15.302s 00:09:48.812 user 0m37.835s 00:09:48.812 sys 0m6.336s 00:09:48.812 11:35:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.812 11:35:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.812 ************************************ 00:09:48.812 END TEST nvmf_connect_stress 00:09:48.812 ************************************ 00:09:48.812 11:35:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:48.812 11:35:56 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:48.812 11:35:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:48.812 11:35:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.812 11:35:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.812 ************************************ 00:09:48.812 START TEST nvmf_fused_ordering 00:09:48.812 ************************************ 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:48.812 * Looking for test storage... 
00:09:48.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.812 11:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:50.716 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:50.716 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:50.716 Found net devices under 0000:84:00.0: cvl_0_0 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.716 11:35:58 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.716 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:50.717 Found net devices under 0000:84:00.1: cvl_0_1 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:50.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:09:50.717 00:09:50.717 --- 10.0.0.2 ping statistics --- 00:09:50.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.717 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:09:50.717 00:09:50.717 --- 10.0.0.1 ping statistics --- 00:09:50.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.717 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2958593 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2958593 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2958593 ']' 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.717 11:35:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:50.717 [2024-07-15 11:35:58.598590] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
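For the fused_ordering run the target is again isolated in the cvl_0_0_ns_spdk network namespace (cvl_0_0 at 10.0.0.2 on the target side, cvl_0_1 at 10.0.0.1 on the initiator side, both reachable per the pings above), and nvmf_tgt is started inside that namespace with core mask 0x2. The rpc_cmd calls that follow then build the test subsystem; in plain rpc.py terms the bring-up is roughly the sketch below, with flags taken verbatim from the later trace lines and the RPC socket left at its default:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512   # shows up below as the 1 GB namespace, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # the initiator-side example app is then pointed at that subsystem:
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'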
00:09:50.717 [2024-07-15 11:35:58.598672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.717 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.717 [2024-07-15 11:35:58.664568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.975 [2024-07-15 11:35:58.772511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.975 [2024-07-15 11:35:58.772578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.975 [2024-07-15 11:35:58.772591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.975 [2024-07-15 11:35:58.772602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.975 [2024-07-15 11:35:58.772612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.975 [2024-07-15 11:35:58.772637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.540 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.540 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:51.540 11:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.540 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.540 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 [2024-07-15 11:35:59.555578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 [2024-07-15 11:35:59.571747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.799 11:35:59 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 NULL1 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.799 11:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:51.799 [2024-07-15 11:35:59.615660] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:51.799 [2024-07-15 11:35:59.615697] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958749 ] 00:09:51.799 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.366 Attached to nqn.2016-06.io.spdk:cnode1 00:09:52.366 Namespace ID: 1 size: 1GB 00:09:52.366 fused_ordering(0) 00:09:52.366 fused_ordering(1) 00:09:52.366 fused_ordering(2) 00:09:52.366 fused_ordering(3) 00:09:52.366 fused_ordering(4) 00:09:52.366 fused_ordering(5) 00:09:52.366 fused_ordering(6) 00:09:52.366 fused_ordering(7) 00:09:52.366 fused_ordering(8) 00:09:52.366 fused_ordering(9) 00:09:52.366 fused_ordering(10) 00:09:52.366 fused_ordering(11) 00:09:52.366 fused_ordering(12) 00:09:52.366 fused_ordering(13) 00:09:52.366 fused_ordering(14) 00:09:52.366 fused_ordering(15) 00:09:52.366 fused_ordering(16) 00:09:52.366 fused_ordering(17) 00:09:52.366 fused_ordering(18) 00:09:52.366 fused_ordering(19) 00:09:52.366 fused_ordering(20) 00:09:52.366 fused_ordering(21) 00:09:52.366 fused_ordering(22) 00:09:52.366 fused_ordering(23) 00:09:52.366 fused_ordering(24) 00:09:52.366 fused_ordering(25) 00:09:52.366 fused_ordering(26) 00:09:52.366 fused_ordering(27) 00:09:52.366 fused_ordering(28) 00:09:52.366 fused_ordering(29) 00:09:52.366 fused_ordering(30) 00:09:52.366 fused_ordering(31) 00:09:52.366 fused_ordering(32) 00:09:52.366 fused_ordering(33) 00:09:52.366 fused_ordering(34) 00:09:52.366 fused_ordering(35) 00:09:52.366 fused_ordering(36) 00:09:52.366 fused_ordering(37) 00:09:52.366 fused_ordering(38) 00:09:52.366 fused_ordering(39) 00:09:52.366 fused_ordering(40) 00:09:52.366 fused_ordering(41) 00:09:52.366 fused_ordering(42) 00:09:52.366 fused_ordering(43) 00:09:52.366 
fused_ordering(44) 00:09:52.366 fused_ordering(45) 00:09:52.366 fused_ordering(46) 00:09:52.366 fused_ordering(47) 00:09:52.366 fused_ordering(48) 00:09:52.366 fused_ordering(49) 00:09:52.366 fused_ordering(50) 00:09:52.366 fused_ordering(51) 00:09:52.366 fused_ordering(52) 00:09:52.366 fused_ordering(53) 00:09:52.366 fused_ordering(54) 00:09:52.366 fused_ordering(55) 00:09:52.366 fused_ordering(56) 00:09:52.366 fused_ordering(57) 00:09:52.366 fused_ordering(58) 00:09:52.366 fused_ordering(59) 00:09:52.366 fused_ordering(60) 00:09:52.366 fused_ordering(61) 00:09:52.366 fused_ordering(62) 00:09:52.366 fused_ordering(63) 00:09:52.366 fused_ordering(64) 00:09:52.366 fused_ordering(65) 00:09:52.366 fused_ordering(66) 00:09:52.366 fused_ordering(67) 00:09:52.366 fused_ordering(68) 00:09:52.366 fused_ordering(69) 00:09:52.366 fused_ordering(70) 00:09:52.366 fused_ordering(71) 00:09:52.366 fused_ordering(72) 00:09:52.366 fused_ordering(73) 00:09:52.366 fused_ordering(74) 00:09:52.366 fused_ordering(75) 00:09:52.366 fused_ordering(76) 00:09:52.366 fused_ordering(77) 00:09:52.366 fused_ordering(78) 00:09:52.366 fused_ordering(79) 00:09:52.366 fused_ordering(80) 00:09:52.366 fused_ordering(81) 00:09:52.366 fused_ordering(82) 00:09:52.366 fused_ordering(83) 00:09:52.366 fused_ordering(84) 00:09:52.366 fused_ordering(85) 00:09:52.366 fused_ordering(86) 00:09:52.366 fused_ordering(87) 00:09:52.366 fused_ordering(88) 00:09:52.366 fused_ordering(89) 00:09:52.366 fused_ordering(90) 00:09:52.366 fused_ordering(91) 00:09:52.366 fused_ordering(92) 00:09:52.366 fused_ordering(93) 00:09:52.366 fused_ordering(94) 00:09:52.366 fused_ordering(95) 00:09:52.366 fused_ordering(96) 00:09:52.366 fused_ordering(97) 00:09:52.366 fused_ordering(98) 00:09:52.366 fused_ordering(99) 00:09:52.366 fused_ordering(100) 00:09:52.366 fused_ordering(101) 00:09:52.366 fused_ordering(102) 00:09:52.366 fused_ordering(103) 00:09:52.366 fused_ordering(104) 00:09:52.366 fused_ordering(105) 00:09:52.366 fused_ordering(106) 00:09:52.366 fused_ordering(107) 00:09:52.366 fused_ordering(108) 00:09:52.366 fused_ordering(109) 00:09:52.366 fused_ordering(110) 00:09:52.366 fused_ordering(111) 00:09:52.366 fused_ordering(112) 00:09:52.366 fused_ordering(113) 00:09:52.366 fused_ordering(114) 00:09:52.366 fused_ordering(115) 00:09:52.366 fused_ordering(116) 00:09:52.366 fused_ordering(117) 00:09:52.366 fused_ordering(118) 00:09:52.366 fused_ordering(119) 00:09:52.366 fused_ordering(120) 00:09:52.366 fused_ordering(121) 00:09:52.366 fused_ordering(122) 00:09:52.366 fused_ordering(123) 00:09:52.366 fused_ordering(124) 00:09:52.366 fused_ordering(125) 00:09:52.366 fused_ordering(126) 00:09:52.366 fused_ordering(127) 00:09:52.366 fused_ordering(128) 00:09:52.366 fused_ordering(129) 00:09:52.366 fused_ordering(130) 00:09:52.366 fused_ordering(131) 00:09:52.366 fused_ordering(132) 00:09:52.366 fused_ordering(133) 00:09:52.366 fused_ordering(134) 00:09:52.366 fused_ordering(135) 00:09:52.366 fused_ordering(136) 00:09:52.366 fused_ordering(137) 00:09:52.366 fused_ordering(138) 00:09:52.366 fused_ordering(139) 00:09:52.366 fused_ordering(140) 00:09:52.366 fused_ordering(141) 00:09:52.366 fused_ordering(142) 00:09:52.366 fused_ordering(143) 00:09:52.366 fused_ordering(144) 00:09:52.366 fused_ordering(145) 00:09:52.366 fused_ordering(146) 00:09:52.366 fused_ordering(147) 00:09:52.366 fused_ordering(148) 00:09:52.366 fused_ordering(149) 00:09:52.366 fused_ordering(150) 00:09:52.366 fused_ordering(151) 00:09:52.366 fused_ordering(152) 00:09:52.366 
fused_ordering(153) 00:09:52.366 fused_ordering(154) 00:09:52.366 fused_ordering(155) 00:09:52.366 fused_ordering(156) 00:09:52.366 fused_ordering(157) 00:09:52.366 fused_ordering(158) 00:09:52.366 fused_ordering(159) 00:09:52.366 fused_ordering(160) 00:09:52.366 fused_ordering(161) 00:09:52.366 fused_ordering(162) 00:09:52.366 fused_ordering(163) 00:09:52.366 fused_ordering(164) 00:09:52.366 fused_ordering(165) 00:09:52.366 fused_ordering(166) 00:09:52.366 fused_ordering(167) 00:09:52.366 fused_ordering(168) 00:09:52.366 fused_ordering(169) 00:09:52.366 fused_ordering(170) 00:09:52.366 fused_ordering(171) 00:09:52.366 fused_ordering(172) 00:09:52.366 fused_ordering(173) 00:09:52.366 fused_ordering(174) 00:09:52.366 fused_ordering(175) 00:09:52.366 fused_ordering(176) 00:09:52.366 fused_ordering(177) 00:09:52.366 fused_ordering(178) 00:09:52.366 fused_ordering(179) 00:09:52.366 fused_ordering(180) 00:09:52.366 fused_ordering(181) 00:09:52.366 fused_ordering(182) 00:09:52.366 fused_ordering(183) 00:09:52.366 fused_ordering(184) 00:09:52.366 fused_ordering(185) 00:09:52.366 fused_ordering(186) 00:09:52.366 fused_ordering(187) 00:09:52.366 fused_ordering(188) 00:09:52.366 fused_ordering(189) 00:09:52.366 fused_ordering(190) 00:09:52.366 fused_ordering(191) 00:09:52.366 fused_ordering(192) 00:09:52.366 fused_ordering(193) 00:09:52.366 fused_ordering(194) 00:09:52.366 fused_ordering(195) 00:09:52.366 fused_ordering(196) 00:09:52.366 fused_ordering(197) 00:09:52.366 fused_ordering(198) 00:09:52.366 fused_ordering(199) 00:09:52.366 fused_ordering(200) 00:09:52.366 fused_ordering(201) 00:09:52.366 fused_ordering(202) 00:09:52.366 fused_ordering(203) 00:09:52.366 fused_ordering(204) 00:09:52.366 fused_ordering(205) 00:09:52.625 fused_ordering(206) 00:09:52.625 fused_ordering(207) 00:09:52.625 fused_ordering(208) 00:09:52.625 fused_ordering(209) 00:09:52.625 fused_ordering(210) 00:09:52.625 fused_ordering(211) 00:09:52.625 fused_ordering(212) 00:09:52.625 fused_ordering(213) 00:09:52.625 fused_ordering(214) 00:09:52.625 fused_ordering(215) 00:09:52.625 fused_ordering(216) 00:09:52.625 fused_ordering(217) 00:09:52.625 fused_ordering(218) 00:09:52.625 fused_ordering(219) 00:09:52.625 fused_ordering(220) 00:09:52.625 fused_ordering(221) 00:09:52.625 fused_ordering(222) 00:09:52.625 fused_ordering(223) 00:09:52.625 fused_ordering(224) 00:09:52.625 fused_ordering(225) 00:09:52.625 fused_ordering(226) 00:09:52.625 fused_ordering(227) 00:09:52.625 fused_ordering(228) 00:09:52.625 fused_ordering(229) 00:09:52.625 fused_ordering(230) 00:09:52.625 fused_ordering(231) 00:09:52.625 fused_ordering(232) 00:09:52.625 fused_ordering(233) 00:09:52.625 fused_ordering(234) 00:09:52.625 fused_ordering(235) 00:09:52.625 fused_ordering(236) 00:09:52.625 fused_ordering(237) 00:09:52.625 fused_ordering(238) 00:09:52.625 fused_ordering(239) 00:09:52.625 fused_ordering(240) 00:09:52.625 fused_ordering(241) 00:09:52.625 fused_ordering(242) 00:09:52.625 fused_ordering(243) 00:09:52.625 fused_ordering(244) 00:09:52.625 fused_ordering(245) 00:09:52.625 fused_ordering(246) 00:09:52.625 fused_ordering(247) 00:09:52.625 fused_ordering(248) 00:09:52.625 fused_ordering(249) 00:09:52.625 fused_ordering(250) 00:09:52.625 fused_ordering(251) 00:09:52.625 fused_ordering(252) 00:09:52.625 fused_ordering(253) 00:09:52.625 fused_ordering(254) 00:09:52.625 fused_ordering(255) 00:09:52.625 fused_ordering(256) 00:09:52.625 fused_ordering(257) 00:09:52.625 fused_ordering(258) 00:09:52.625 fused_ordering(259) 00:09:52.625 fused_ordering(260) 
00:09:52.625 fused_ordering(261) ... 00:09:54.393 fused_ordering(1012) [fused_ordering iterations 261 through 1012, all reported in order between 00:09:52.625 and 00:09:54.393]
00:09:54.393 fused_ordering(1013) 00:09:54.393 fused_ordering(1014) 00:09:54.393 fused_ordering(1015) 00:09:54.393 fused_ordering(1016) 00:09:54.393 fused_ordering(1017) 00:09:54.393 fused_ordering(1018) 00:09:54.393 fused_ordering(1019) 00:09:54.393 fused_ordering(1020) 00:09:54.393 fused_ordering(1021) 00:09:54.393 fused_ordering(1022) 00:09:54.393 fused_ordering(1023) 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.393 rmmod nvme_tcp 00:09:54.393 rmmod nvme_fabrics 00:09:54.393 rmmod nvme_keyring 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2958593 ']' 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2958593 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2958593 ']' 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2958593 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2958593 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2958593' 00:09:54.393 killing process with pid 2958593 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2958593 00:09:54.393 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2958593 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.652 11:36:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.556 11:36:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.556 00:09:56.556 real 0m8.188s 00:09:56.556 user 0m5.673s 00:09:56.556 sys 0m3.494s 00:09:56.556 11:36:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.556 11:36:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:56.556 ************************************ 00:09:56.556 END TEST nvmf_fused_ordering 00:09:56.556 ************************************ 00:09:56.556 11:36:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.556 11:36:04 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:56.556 11:36:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.556 11:36:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.556 11:36:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.556 ************************************ 00:09:56.556 START TEST nvmf_delete_subsystem 00:09:56.556 ************************************ 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:56.556 * Looking for test storage... 00:09:56.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.556 11:36:04 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.556 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.815 11:36:04 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.815 11:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:58.716 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.717 11:36:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:58.717 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:58.717 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.717 11:36:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:58.717 Found net devices under 0000:84:00.0: cvl_0_0 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:58.717 Found net devices under 0000:84:00.1: cvl_0_1 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.717 11:36:06 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.717 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:09:58.975 00:09:58.975 --- 10.0.0.2 ping statistics --- 00:09:58.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.975 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:09:58.975 00:09:58.975 --- 10.0.0.1 ping statistics --- 00:09:58.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.975 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2961083 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2961083 00:09:58.975 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2961083 ']' 00:09:58.976 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.976 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.976 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.976 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.976 11:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.976 [2024-07-15 11:36:06.802849] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:58.976 [2024-07-15 11:36:06.802941] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.976 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.976 [2024-07-15 11:36:06.867628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:59.234 [2024-07-15 11:36:06.972201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
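The sequence traced above builds the single-host NVMe/TCP topology for this test: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, its peer port (cvl_0_1) stays in the default namespace as 10.0.0.1 for the initiator, TCP port 4420 is opened in iptables, reachability is checked with a ping in each direction, and nvmf_tgt is then started inside that namespace. A condensed sketch of the same steps, using the interface names, namespace name, and workspace path from this particular CI host, is:

  # Names and paths below are the ones from this run, not generic defaults.
  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (default namespace)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # default namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> default namespace

  # Target started inside the namespace: two cores (mask 0x3), all tracepoint groups enabled.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &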
00:09:59.234 [2024-07-15 11:36:06.972249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.234 [2024-07-15 11:36:06.972277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.234 [2024-07-15 11:36:06.972289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.234 [2024-07-15 11:36:06.972298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.234 [2024-07-15 11:36:06.972391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.234 [2024-07-15 11:36:06.972397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 [2024-07-15 11:36:07.120129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 [2024-07-15 11:36:07.136325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 NULL1 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 Delay0 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2961109 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:59.234 11:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:59.234 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.234 [2024-07-15 11:36:07.210957] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
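With the target up, the test configures it over JSON-RPC: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that queued I/O stays outstanding long enough to race with the later delete; spdk_nvme_perf is then launched against that listener. Roughly the same configuration issued with scripts/rpc.py directly (rpc_cmd in the test harness is essentially a wrapper around it; this assumes the default /var/tmp/spdk.sock RPC socket, and the four 1000000 values are the delay latencies taken from the trace) would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as in the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                         # null bdev backing (size/block size from the trace)
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # expose the delayed bdev as the namespace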
00:10:01.760 11:36:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.760 11:36:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.760 11:36:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 [2024-07-15 11:36:09.293054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5af4000c00 is same with the state(5) to be set 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 
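The burst of "Read/Write completed with error (sct=0, sc=8)" lines that follows is expected here: the subsystem is deleted while spdk_nvme_perf still has up to 128 commands queued against the delayed namespace, so the outstanding I/O completes with NVMe generic status 0x08 (command aborted due to SQ deletion) rather than hanging. A minimal sketch of that race, using the timings from this run (5 s workload, delete after a 2 s sleep), looks like:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # 5-second randrw workload at queue depth 128 against the listener (flags as in the trace).
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  sleep 2                                                      # let I/O pile up behind the delay bdev
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  wait "$perf_pid" || true                                     # perf may exit non-zero once its queues are torn down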
00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read 
completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 starting I/O failed: -6 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.760 Write completed with error (sct=0, sc=8) 00:10:01.760 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 starting I/O failed: -6 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 starting I/O failed: -6 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 starting I/O failed: -6 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 starting I/O failed: -6 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 [2024-07-15 11:36:09.293852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5025c0 is same with the state(5) to be set 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read 
completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:01.761 Write completed with error (sct=0, sc=8) 00:10:01.761 Read completed with error (sct=0, sc=8) 00:10:02.327 [2024-07-15 11:36:10.266649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x503ac0 is same with the state(5) to be set 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 [2024-07-15 11:36:10.295800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5027a0 is same with the state(5) to be set 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error 
(sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 [2024-07-15 11:36:10.296057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5023e0 is same with the state(5) to be set 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 [2024-07-15 11:36:10.296252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5af400d760 is same with the state(5) to be set 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, 
sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Read completed with error (sct=0, sc=8) 00:10:02.327 Write completed with error (sct=0, sc=8) 00:10:02.327 [2024-07-15 11:36:10.296675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5af400cfe0 is same with the state(5) to be set 00:10:02.327 Initializing NVMe Controllers 00:10:02.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:02.327 Controller IO queue size 128, less than required. 00:10:02.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:02.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:02.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:02.328 Initialization complete. Launching workers. 00:10:02.328 ======================================================== 00:10:02.328 Latency(us) 00:10:02.328 Device Information : IOPS MiB/s Average min max 00:10:02.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.66 0.08 902883.71 447.81 2003675.19 00:10:02.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.72 0.08 904176.26 646.87 1012861.93 00:10:02.328 ======================================================== 00:10:02.328 Total : 339.38 0.17 903514.87 447.81 2003675.19 00:10:02.328 00:10:02.328 [2024-07-15 11:36:10.297536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x503ac0 (9): Bad file descriptor 00:10:02.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:02.328 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.328 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:02.328 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2961109 00:10:02.328 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2961109 00:10:02.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2961109) - No such process 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2961109 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2961109 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2961109 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:02.891 [2024-07-15 11:36:10.820557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2962136 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:02.891 11:36:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.891 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.148 [2024-07-15 11:36:10.882599] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
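The trace above rebuilds the target and kicks off a fresh background perf job before polling for it with kill -0 / sleep 0.5. Condensed into a by-hand sketch (rpc.py and spdk_nvme_perf paths shortened to the SPDK checkout; the polling loop is a reconstruction of delete_subsystem.sh lines 56-60 as reported in the trace, not a verbatim copy):

  # Recreate the subsystem, expose it on TCP 10.0.0.2:4420 and attach the Delay0 namespace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Same perf flags as the trace: cores 2-3, queue depth 128, 70% reads, 512 B I/O, 3 second run
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  # Poll until the run finishes on its own; give up if it hangs for more than ~10 s
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1
      sleep 0.5
  done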
00:10:03.406 11:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.406 11:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:03.406 11:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.994 11:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.994 11:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:03.994 11:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.559 11:36:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.559 11:36:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:04.559 11:36:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.124 11:36:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.124 11:36:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:05.124 11:36:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.390 11:36:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.390 11:36:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:05.390 11:36:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.961 11:36:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.961 11:36:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:05.961 11:36:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.218 Initializing NVMe Controllers 00:10:06.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:06.218 Controller IO queue size 128, less than required. 00:10:06.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:06.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:06.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:06.218 Initialization complete. Launching workers. 
00:10:06.218 ======================================================== 00:10:06.218 Latency(us) 00:10:06.218 Device Information : IOPS MiB/s Average min max 00:10:06.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003682.15 1000188.70 1012360.61 00:10:06.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004604.45 1000239.47 1012402.93 00:10:06.218 ======================================================== 00:10:06.218 Total : 256.00 0.12 1004143.30 1000188.70 1012402.93 00:10:06.218 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2962136 00:10:06.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2962136) - No such process 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2962136 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.476 rmmod nvme_tcp 00:10:06.476 rmmod nvme_fabrics 00:10:06.476 rmmod nvme_keyring 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2961083 ']' 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2961083 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2961083 ']' 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2961083 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2961083 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2961083' 00:10:06.476 killing process with pid 2961083 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2961083 00:10:06.476 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2961083 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.736 11:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.274 11:36:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.274 00:10:09.274 real 0m12.242s 00:10:09.274 user 0m27.477s 00:10:09.274 sys 0m2.997s 00:10:09.274 11:36:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.274 11:36:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.274 ************************************ 00:10:09.274 END TEST nvmf_delete_subsystem 00:10:09.274 ************************************ 00:10:09.274 11:36:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.274 11:36:16 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:09.274 11:36:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.274 11:36:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.274 11:36:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.274 ************************************ 00:10:09.274 START TEST nvmf_ns_masking 00:10:09.274 ************************************ 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:09.274 * Looking for test storage... 
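Most of the trace from here to the first RPC call is nvmftestinit: sourcing nvmf/common.sh, scanning the PCI bus for the two E810 ports, and wiring one of them into a private network namespace for the target. Condensed from that trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this run discovered), the plumbing amounts to:

  # Target port goes into its own netns; the initiator port stays in the default namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept inbound TCP 4420 on the initiator-side interface, then sanity-ping both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1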
00:10:09.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.274 11:36:16 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c92abf7a-5a9f-4ed8-b779-0334c704af16 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8eff108d-caa0-4cb4-af0e-2de10a04b846 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=bc09df42-a221-43f1-951f-498ef8d5fcae 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.275 11:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:11.175 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:11.175 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.175 
11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:11.175 Found net devices under 0000:84:00.0: cvl_0_0 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:11.175 Found net devices under 0000:84:00.1: cvl_0_1 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.175 11:36:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:10:11.175 00:10:11.175 --- 10.0.0.2 ping statistics --- 00:10:11.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.175 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:10:11.175 00:10:11.175 --- 10.0.0.1 ping statistics --- 00:10:11.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.175 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.175 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2964501 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2964501 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2964501 ']' 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.176 11:36:19 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.176 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:11.433 [2024-07-15 11:36:19.204109] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:11.433 [2024-07-15 11:36:19.204186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.433 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.433 [2024-07-15 11:36:19.266338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.433 [2024-07-15 11:36:19.365810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.433 [2024-07-15 11:36:19.365868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.434 [2024-07-15 11:36:19.365897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.434 [2024-07-15 11:36:19.365908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.434 [2024-07-15 11:36:19.365918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.434 [2024-07-15 11:36:19.365942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.691 11:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.948 [2024-07-15 11:36:19.775048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.948 11:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:11.948 11:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:11.948 11:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:12.205 Malloc1 00:10:12.205 11:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:12.462 Malloc2 00:10:12.462 11:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
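Once the target app is up and listening on its RPC socket, ns_masking.sh creates its objects over rpc.py. The calls replayed just above, in by-hand form (rpc.py path shortened to the SPDK checkout; -u 8192 sets the in-capsule data size, the remaining flags are copied as-is from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB backing bdev, 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME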
00:10:13.027 11:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:13.027 11:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.285 [2024-07-15 11:36:21.183841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.285 11:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:13.285 11:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc09df42-a221-43f1-951f-498ef8d5fcae -a 10.0.0.2 -s 4420 -i 4 00:10:13.543 11:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.543 11:36:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:13.543 11:36:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.543 11:36:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:13.543 11:36:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:15.441 11:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:15.441 11:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:15.441 11:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.441 11:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:15.441 11:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.441 11:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:15.700 [ 0]:0x1 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9623a37df0c8475f98cf8e280f97a26d 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9623a37df0c8475f98cf8e280f97a26d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.700 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
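Everything below is a variation on the connect-and-check pattern just traced, so a condensed by-hand equivalent is worth spelling out once (controller name nvme0 and the -I host UUID are simply what this run got; the masking RPCs further down key off the host NQN passed with -q):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I bc09df42-a221-43f1-951f-498ef8d5fcae -a 10.0.0.2 -s 4420 -i 4
  # ns_is_visible: the namespace must show up in list-ns and report a non-zero NGUID;
  # an all-zero NGUID (or no list-ns entry at all) means it is masked from this host
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid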
00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:15.958 [ 0]:0x1 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9623a37df0c8475f98cf8e280f97a26d 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9623a37df0c8475f98cf8e280f97a26d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:15.958 [ 1]:0x2 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.958 11:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.524 11:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:16.524 11:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:16.524 11:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc09df42-a221-43f1-951f-498ef8d5fcae -a 10.0.0.2 -s 4420 -i 4 00:10:16.781 11:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:16.781 11:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:16.781 11:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.781 11:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:16.781 11:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:16.781 11:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:19.309 11:36:26 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:19.309 [ 0]:0x2 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.309 11:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:19.309 [ 0]:0x1 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9623a37df0c8475f98cf8e280f97a26d 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9623a37df0c8475f98cf8e280f97a26d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:19.309 [ 1]:0x2 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.309 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.567 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:19.568 [ 0]:0x2 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.568 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.824 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:19.824 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.824 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:19.824 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.824 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc09df42-a221-43f1-951f-498ef8d5fcae -a 10.0.0.2 -s 4420 -i 4 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:20.081 11:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:22.605 11:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:22.605 11:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:22.605 11:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.605 11:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
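The per-host masking itself is driven entirely over the RPC socket; a condensed sketch of the flow this trace exercises, reusing the subsystem, bdev and host NQNs from the log (rpc.py here stands for the full scripts/rpc.py path shown above):

    # create the namespace hidden from all hosts, then grant/revoke visibility per host NQN
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # host1 now sees NSID 1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # host1 loses NSID 1 again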
00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:22.605 [ 0]:0x1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9623a37df0c8475f98cf8e280f97a26d 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9623a37df0c8475f98cf8e280f97a26d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:22.605 [ 1]:0x2 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:22.605 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:22.606 [ 0]:0x2 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:22.606 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:22.862 [2024-07-15 11:36:30.748510] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:22.862 request: 00:10:22.862 { 00:10:22.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.862 "nsid": 2, 00:10:22.862 "host": "nqn.2016-06.io.spdk:host1", 00:10:22.862 "method": "nvmf_ns_remove_host", 00:10:22.862 "req_id": 1 00:10:22.862 } 00:10:22.862 Got JSON-RPC error response 00:10:22.862 response: 00:10:22.862 { 00:10:22.862 "code": -32602, 00:10:22.862 "message": "Invalid parameters" 00:10:22.862 } 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:22.862 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:23.119 [ 0]:0x2 00:10:23.119 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:23.119 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:23.119 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03134dae632469689c14ba6f8ad1e6a 00:10:23.119 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
a03134dae632469689c14ba6f8ad1e6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:23.119 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:23.119 11:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2966009 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2966009 /var/tmp/host.sock 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2966009 ']' 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.119 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:23.119 [2024-07-15 11:36:31.103563] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:10:23.119 [2024-07-15 11:36:31.103666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966009 ] 00:10:23.377 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.377 [2024-07-15 11:36:31.165749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.377 [2024-07-15 11:36:31.274432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.635 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.635 11:36:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:23.635 11:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.892 11:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:24.149 11:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c92abf7a-5a9f-4ed8-b779-0334c704af16 00:10:24.149 11:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:24.149 11:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C92ABF7A5A9F4ED8B7790334C704AF16 -i 00:10:24.406 11:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8eff108d-caa0-4cb4-af0e-2de10a04b846 00:10:24.406 11:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:24.406 11:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8EFF108DCAA04CB4AF0E2DE10A04B846 -i 00:10:24.662 11:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:24.919 11:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:25.177 11:36:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:25.177 11:36:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:25.740 nvme0n1 00:10:25.740 11:36:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:25.740 11:36:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:10:26.304 nvme1n2 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:26.304 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:26.561 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c92abf7a-5a9f-4ed8-b779-0334c704af16 == \c\9\2\a\b\f\7\a\-\5\a\9\f\-\4\e\d\8\-\b\7\7\9\-\0\3\3\4\c\7\0\4\a\f\1\6 ]] 00:10:26.561 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:26.561 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:26.561 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 8eff108d-caa0-4cb4-af0e-2de10a04b846 == \8\e\f\f\1\0\8\d\-\c\a\a\0\-\4\c\b\4\-\a\f\0\e\-\2\d\e\1\0\a\0\4\b\8\4\6 ]] 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2966009 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2966009 ']' 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2966009 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.819 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2966009 00:10:27.076 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:27.076 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:27.076 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2966009' 00:10:27.076 killing process with pid 2966009 00:10:27.076 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2966009 00:10:27.076 11:36:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2966009 00:10:27.334 11:36:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:27.591 11:36:35 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.591 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.591 rmmod nvme_tcp 00:10:27.591 rmmod nvme_fabrics 00:10:27.591 rmmod nvme_keyring 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2964501 ']' 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2964501 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2964501 ']' 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2964501 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2964501 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2964501' 00:10:27.851 killing process with pid 2964501 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2964501 00:10:27.851 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2964501 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.127 11:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.041 11:36:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.041 00:10:30.041 real 0m21.193s 00:10:30.041 user 0m27.550s 00:10:30.041 sys 0m4.304s 00:10:30.041 11:36:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.041 11:36:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:30.041 ************************************ 00:10:30.041 END TEST nvmf_ns_masking 00:10:30.041 ************************************ 00:10:30.041 11:36:38 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:10:30.041 11:36:38 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:30.041 11:36:38 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:30.041 11:36:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.041 11:36:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.041 11:36:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.299 ************************************ 00:10:30.299 START TEST nvmf_nvme_cli 00:10:30.299 ************************************ 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:30.299 * Looking for test storage... 00:10:30.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.299 11:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.832 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:32.833 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:32.833 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:32.833 Found net devices under 0000:84:00.0: cvl_0_0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:32.833 Found net devices under 0000:84:00.1: cvl_0_1 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.833 11:36:40 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:10:32.833 00:10:32.833 --- 10.0.0.2 ping statistics --- 00:10:32.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.833 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:10:32.833 00:10:32.833 --- 10.0.0.1 ping statistics --- 00:10:32.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.833 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2968631 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2968631 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2968631 ']' 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 [2024-07-15 11:36:40.468645] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:10:32.833 [2024-07-15 11:36:40.468746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.833 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.833 [2024-07-15 11:36:40.537089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.833 [2024-07-15 11:36:40.644736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.833 [2024-07-15 11:36:40.644795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.833 [2024-07-15 11:36:40.644823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.833 [2024-07-15 11:36:40.644835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.833 [2024-07-15 11:36:40.644845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.833 [2024-07-15 11:36:40.644895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.833 [2024-07-15 11:36:40.644953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.833 [2024-07-15 11:36:40.644975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.833 [2024-07-15 11:36:40.644980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.833 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.834 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.834 [2024-07-15 11:36:40.788491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.834 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.834 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.834 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.834 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.091 Malloc0 00:10:33.091 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.091 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 Malloc1 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.092 11:36:40 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 [2024-07-15 11:36:40.874243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.092 11:36:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:10:33.092 00:10:33.092 Discovery Log Number of Records 2, Generation counter 2 00:10:33.092 =====Discovery Log Entry 0====== 00:10:33.092 trtype: tcp 00:10:33.092 adrfam: ipv4 00:10:33.092 subtype: current discovery subsystem 00:10:33.092 treq: not required 00:10:33.092 portid: 0 00:10:33.092 trsvcid: 4420 00:10:33.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:33.092 traddr: 10.0.0.2 00:10:33.092 eflags: explicit discovery connections, duplicate discovery information 00:10:33.092 sectype: none 00:10:33.092 =====Discovery Log Entry 1====== 00:10:33.092 trtype: tcp 00:10:33.092 adrfam: ipv4 00:10:33.092 subtype: nvme subsystem 00:10:33.092 treq: not required 00:10:33.092 portid: 0 00:10:33.092 trsvcid: 4420 00:10:33.092 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:33.092 traddr: 10.0.0.2 00:10:33.092 eflags: none 00:10:33.092 sectype: none 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:33.092 11:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:33.658 11:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:33.658 11:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:10:33.658 11:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.658 11:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:33.658 11:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:33.658 11:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:36.186 11:36:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:36.186 /dev/nvme0n1 ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.186 rmmod nvme_tcp 00:10:36.186 rmmod nvme_fabrics 00:10:36.186 rmmod nvme_keyring 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2968631 ']' 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2968631 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2968631 ']' 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2968631 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2968631 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2968631' 00:10:36.186 killing process with pid 2968631 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2968631 00:10:36.186 11:36:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2968631 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.186 11:36:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.723 11:36:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.723 00:10:38.723 real 0m8.178s 00:10:38.723 user 0m14.488s 00:10:38.723 sys 0m2.250s 00:10:38.723 11:36:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.723 11:36:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:38.723 ************************************ 00:10:38.723 END TEST nvmf_nvme_cli 00:10:38.723 ************************************ 00:10:38.723 11:36:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:38.723 11:36:46 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:38.723 11:36:46 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:38.723 11:36:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.723 11:36:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.723 11:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.723 ************************************ 00:10:38.723 START TEST nvmf_vfio_user 00:10:38.723 ************************************ 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:38.723 * Looking for test storage... 00:10:38.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:38.723 
11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2969438 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2969438' 00:10:38.723 Process pid: 2969438 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2969438 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2969438 ']' 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:38.723 [2024-07-15 11:36:46.391680] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:38.723 [2024-07-15 11:36:46.391796] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.723 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.723 [2024-07-15 11:36:46.455144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.723 [2024-07-15 11:36:46.566100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.723 [2024-07-15 11:36:46.566160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.723 [2024-07-15 11:36:46.566188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.723 [2024-07-15 11:36:46.566199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.723 [2024-07-15 11:36:46.566209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
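The vfio-user setup that the rpc.py calls below trace can be reproduced with roughly this sequence (a sketch condensed from the commands visible in this log, run from the spdk checkout root after the target above is listening on /var/tmp/spdk.sock; note the VFIOUSER listener address is a directory on the local filesystem rather than an IP):

  # target was started above with: nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done
  # an initiator then attaches by socket path instead of IP, e.g.:
  ./build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g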
00:10:38.723 [2024-07-15 11:36:46.566292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.723 [2024-07-15 11:36:46.566331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.723 [2024-07-15 11:36:46.566428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.723 [2024-07-15 11:36:46.566425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:38.723 11:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:40.096 11:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:40.096 11:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:40.096 11:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:40.096 11:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.096 11:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:40.096 11:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:40.353 Malloc1 00:10:40.353 11:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:40.610 11:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:40.867 11:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:41.124 11:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:41.124 11:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:41.124 11:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:41.382 Malloc2 00:10:41.382 11:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:41.640 11:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:41.898 11:36:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:42.156 11:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:42.156 11:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:42.156 11:36:50 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:42.156 11:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:42.156 11:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:42.156 11:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:42.156 [2024-07-15 11:36:50.064766] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:42.156 [2024-07-15 11:36:50.064811] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969870 ] 00:10:42.156 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.156 [2024-07-15 11:36:50.098127] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:42.156 [2024-07-15 11:36:50.110559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.156 [2024-07-15 11:36:50.110587] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb18a934000 00:10:42.156 [2024-07-15 11:36:50.111554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.112548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.113551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.114555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.115564] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.116567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.117573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.118581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.156 [2024-07-15 11:36:50.119586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.156 [2024-07-15 11:36:50.119611] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb18a929000 00:10:42.156 [2024-07-15 11:36:50.120762] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.156 [2024-07-15 11:36:50.132346] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:42.156 [2024-07-15 11:36:50.132384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:42.156 [2024-07-15 11:36:50.141733] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:42.156 [2024-07-15 11:36:50.141807] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:42.156 [2024-07-15 11:36:50.141928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:42.156 [2024-07-15 11:36:50.141965] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:42.156 [2024-07-15 11:36:50.141976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:42.156 [2024-07-15 11:36:50.142705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:42.156 [2024-07-15 11:36:50.142728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:42.156 [2024-07-15 11:36:50.142746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:42.415 [2024-07-15 11:36:50.143711] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:42.415 [2024-07-15 11:36:50.143759] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:42.416 [2024-07-15 11:36:50.143775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:42.416 [2024-07-15 11:36:50.144716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:42.416 [2024-07-15 11:36:50.144744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:42.416 [2024-07-15 11:36:50.145745] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:42.416 [2024-07-15 11:36:50.145769] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:42.416 [2024-07-15 11:36:50.145780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:42.416 [2024-07-15 11:36:50.145793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:42.416 [2024-07-15 11:36:50.145903] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:42.416 [2024-07-15 11:36:50.145911] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:42.416 [2024-07-15 11:36:50.145920] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:42.416 [2024-07-15 11:36:50.146745] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:42.416 [2024-07-15 11:36:50.147749] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:42.416 [2024-07-15 11:36:50.148756] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:42.416 [2024-07-15 11:36:50.149748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:42.416 [2024-07-15 11:36:50.149863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:42.416 [2024-07-15 11:36:50.150766] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:42.416 [2024-07-15 11:36:50.150785] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:42.416 [2024-07-15 11:36:50.150794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.150820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:42.416 [2024-07-15 11:36:50.150835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.150867] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.416 [2024-07-15 11:36:50.150878] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.416 [2024-07-15 11:36:50.150901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.150946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.150967] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:42.416 [2024-07-15 11:36:50.150979] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:42.416 [2024-07-15 11:36:50.150987] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:42.416 [2024-07-15 11:36:50.150995] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:42.416 [2024-07-15 11:36:50.151003] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:42.416 [2024-07-15 11:36:50.151011] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:42.416 [2024-07-15 11:36:50.151019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.416 [2024-07-15 11:36:50.151116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.416 [2024-07-15 11:36:50.151131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.416 [2024-07-15 11:36:50.151143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.416 [2024-07-15 11:36:50.151151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151205] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:42.416 [2024-07-15 11:36:50.151213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151319] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151348] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:42.416 [2024-07-15 11:36:50.151356] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:42.416 [2024-07-15 11:36:50.151365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151398] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:42.416 [2024-07-15 11:36:50.151418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151446] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.416 [2024-07-15 11:36:50.151453] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.416 [2024-07-15 11:36:50.151463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151542] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.416 [2024-07-15 11:36:50.151549] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.416 [2024-07-15 11:36:50.151558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:10:42.416 [2024-07-15 11:36:50.151609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151644] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:42.416 [2024-07-15 11:36:50.151652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:42.416 [2024-07-15 11:36:50.151659] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:42.416 [2024-07-15 11:36:50.151687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:42.416 [2024-07-15 11:36:50.151781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:42.416 [2024-07-15 11:36:50.151799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:42.417 [2024-07-15 11:36:50.151816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.417 [2024-07-15 11:36:50.151828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:42.417 [2024-07-15 11:36:50.151852] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:42.417 [2024-07-15 11:36:50.151863] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:42.417 [2024-07-15 11:36:50.151869] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:42.417 [2024-07-15 11:36:50.151879] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:42.417 [2024-07-15 11:36:50.151889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:42.417 [2024-07-15 11:36:50.151901] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:42.417 
[2024-07-15 11:36:50.151910] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:42.417 [2024-07-15 11:36:50.151919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:42.417 [2024-07-15 11:36:50.151930] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:42.417 [2024-07-15 11:36:50.151938] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.417 [2024-07-15 11:36:50.151947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.417 [2024-07-15 11:36:50.151960] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:42.417 [2024-07-15 11:36:50.151968] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:42.417 [2024-07-15 11:36:50.151977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:42.417 [2024-07-15 11:36:50.151989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:42.417 [2024-07-15 11:36:50.152010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:42.417 [2024-07-15 11:36:50.152045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:42.417 [2024-07-15 11:36:50.152057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:42.417 ===================================================== 00:10:42.417 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:42.417 ===================================================== 00:10:42.417 Controller Capabilities/Features 00:10:42.417 ================================ 00:10:42.417 Vendor ID: 4e58 00:10:42.417 Subsystem Vendor ID: 4e58 00:10:42.417 Serial Number: SPDK1 00:10:42.417 Model Number: SPDK bdev Controller 00:10:42.417 Firmware Version: 24.09 00:10:42.417 Recommended Arb Burst: 6 00:10:42.417 IEEE OUI Identifier: 8d 6b 50 00:10:42.417 Multi-path I/O 00:10:42.417 May have multiple subsystem ports: Yes 00:10:42.417 May have multiple controllers: Yes 00:10:42.417 Associated with SR-IOV VF: No 00:10:42.417 Max Data Transfer Size: 131072 00:10:42.417 Max Number of Namespaces: 32 00:10:42.417 Max Number of I/O Queues: 127 00:10:42.417 NVMe Specification Version (VS): 1.3 00:10:42.417 NVMe Specification Version (Identify): 1.3 00:10:42.417 Maximum Queue Entries: 256 00:10:42.417 Contiguous Queues Required: Yes 00:10:42.417 Arbitration Mechanisms Supported 00:10:42.417 Weighted Round Robin: Not Supported 00:10:42.417 Vendor Specific: Not Supported 00:10:42.417 Reset Timeout: 15000 ms 00:10:42.417 Doorbell Stride: 4 bytes 00:10:42.417 NVM Subsystem Reset: Not Supported 00:10:42.417 Command Sets Supported 00:10:42.417 NVM Command Set: Supported 00:10:42.417 Boot Partition: Not Supported 00:10:42.417 Memory Page Size Minimum: 4096 bytes 00:10:42.417 Memory Page Size Maximum: 4096 bytes 00:10:42.417 Persistent Memory Region: Not Supported 
00:10:42.417 Optional Asynchronous Events Supported 00:10:42.417 Namespace Attribute Notices: Supported 00:10:42.417 Firmware Activation Notices: Not Supported 00:10:42.417 ANA Change Notices: Not Supported 00:10:42.417 PLE Aggregate Log Change Notices: Not Supported 00:10:42.417 LBA Status Info Alert Notices: Not Supported 00:10:42.417 EGE Aggregate Log Change Notices: Not Supported 00:10:42.417 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.417 Zone Descriptor Change Notices: Not Supported 00:10:42.417 Discovery Log Change Notices: Not Supported 00:10:42.417 Controller Attributes 00:10:42.417 128-bit Host Identifier: Supported 00:10:42.417 Non-Operational Permissive Mode: Not Supported 00:10:42.417 NVM Sets: Not Supported 00:10:42.417 Read Recovery Levels: Not Supported 00:10:42.417 Endurance Groups: Not Supported 00:10:42.417 Predictable Latency Mode: Not Supported 00:10:42.417 Traffic Based Keep ALive: Not Supported 00:10:42.417 Namespace Granularity: Not Supported 00:10:42.417 SQ Associations: Not Supported 00:10:42.417 UUID List: Not Supported 00:10:42.417 Multi-Domain Subsystem: Not Supported 00:10:42.417 Fixed Capacity Management: Not Supported 00:10:42.417 Variable Capacity Management: Not Supported 00:10:42.417 Delete Endurance Group: Not Supported 00:10:42.417 Delete NVM Set: Not Supported 00:10:42.417 Extended LBA Formats Supported: Not Supported 00:10:42.417 Flexible Data Placement Supported: Not Supported 00:10:42.417 00:10:42.417 Controller Memory Buffer Support 00:10:42.417 ================================ 00:10:42.417 Supported: No 00:10:42.417 00:10:42.417 Persistent Memory Region Support 00:10:42.417 ================================ 00:10:42.417 Supported: No 00:10:42.417 00:10:42.417 Admin Command Set Attributes 00:10:42.417 ============================ 00:10:42.417 Security Send/Receive: Not Supported 00:10:42.417 Format NVM: Not Supported 00:10:42.417 Firmware Activate/Download: Not Supported 00:10:42.417 Namespace Management: Not Supported 00:10:42.417 Device Self-Test: Not Supported 00:10:42.417 Directives: Not Supported 00:10:42.417 NVMe-MI: Not Supported 00:10:42.417 Virtualization Management: Not Supported 00:10:42.417 Doorbell Buffer Config: Not Supported 00:10:42.417 Get LBA Status Capability: Not Supported 00:10:42.417 Command & Feature Lockdown Capability: Not Supported 00:10:42.417 Abort Command Limit: 4 00:10:42.417 Async Event Request Limit: 4 00:10:42.417 Number of Firmware Slots: N/A 00:10:42.417 Firmware Slot 1 Read-Only: N/A 00:10:42.417 Firmware Activation Without Reset: N/A 00:10:42.417 Multiple Update Detection Support: N/A 00:10:42.417 Firmware Update Granularity: No Information Provided 00:10:42.417 Per-Namespace SMART Log: No 00:10:42.417 Asymmetric Namespace Access Log Page: Not Supported 00:10:42.417 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:42.417 Command Effects Log Page: Supported 00:10:42.417 Get Log Page Extended Data: Supported 00:10:42.417 Telemetry Log Pages: Not Supported 00:10:42.417 Persistent Event Log Pages: Not Supported 00:10:42.417 Supported Log Pages Log Page: May Support 00:10:42.417 Commands Supported & Effects Log Page: Not Supported 00:10:42.417 Feature Identifiers & Effects Log Page:May Support 00:10:42.417 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.417 Data Area 4 for Telemetry Log: Not Supported 00:10:42.417 Error Log Page Entries Supported: 128 00:10:42.417 Keep Alive: Supported 00:10:42.417 Keep Alive Granularity: 10000 ms 00:10:42.417 00:10:42.417 NVM Command Set Attributes 
00:10:42.417 ========================== 00:10:42.417 Submission Queue Entry Size 00:10:42.417 Max: 64 00:10:42.417 Min: 64 00:10:42.417 Completion Queue Entry Size 00:10:42.417 Max: 16 00:10:42.417 Min: 16 00:10:42.417 Number of Namespaces: 32 00:10:42.417 Compare Command: Supported 00:10:42.417 Write Uncorrectable Command: Not Supported 00:10:42.417 Dataset Management Command: Supported 00:10:42.417 Write Zeroes Command: Supported 00:10:42.417 Set Features Save Field: Not Supported 00:10:42.417 Reservations: Not Supported 00:10:42.417 Timestamp: Not Supported 00:10:42.417 Copy: Supported 00:10:42.417 Volatile Write Cache: Present 00:10:42.417 Atomic Write Unit (Normal): 1 00:10:42.417 Atomic Write Unit (PFail): 1 00:10:42.417 Atomic Compare & Write Unit: 1 00:10:42.417 Fused Compare & Write: Supported 00:10:42.417 Scatter-Gather List 00:10:42.417 SGL Command Set: Supported (Dword aligned) 00:10:42.417 SGL Keyed: Not Supported 00:10:42.417 SGL Bit Bucket Descriptor: Not Supported 00:10:42.417 SGL Metadata Pointer: Not Supported 00:10:42.417 Oversized SGL: Not Supported 00:10:42.417 SGL Metadata Address: Not Supported 00:10:42.417 SGL Offset: Not Supported 00:10:42.417 Transport SGL Data Block: Not Supported 00:10:42.417 Replay Protected Memory Block: Not Supported 00:10:42.417 00:10:42.417 Firmware Slot Information 00:10:42.417 ========================= 00:10:42.417 Active slot: 1 00:10:42.417 Slot 1 Firmware Revision: 24.09 00:10:42.417 00:10:42.417 00:10:42.417 Commands Supported and Effects 00:10:42.417 ============================== 00:10:42.417 Admin Commands 00:10:42.417 -------------- 00:10:42.417 Get Log Page (02h): Supported 00:10:42.417 Identify (06h): Supported 00:10:42.417 Abort (08h): Supported 00:10:42.417 Set Features (09h): Supported 00:10:42.417 Get Features (0Ah): Supported 00:10:42.417 Asynchronous Event Request (0Ch): Supported 00:10:42.417 Keep Alive (18h): Supported 00:10:42.417 I/O Commands 00:10:42.417 ------------ 00:10:42.417 Flush (00h): Supported LBA-Change 00:10:42.417 Write (01h): Supported LBA-Change 00:10:42.417 Read (02h): Supported 00:10:42.417 Compare (05h): Supported 00:10:42.417 Write Zeroes (08h): Supported LBA-Change 00:10:42.417 Dataset Management (09h): Supported LBA-Change 00:10:42.417 Copy (19h): Supported LBA-Change 00:10:42.418 00:10:42.418 Error Log 00:10:42.418 ========= 00:10:42.418 00:10:42.418 Arbitration 00:10:42.418 =========== 00:10:42.418 Arbitration Burst: 1 00:10:42.418 00:10:42.418 Power Management 00:10:42.418 ================ 00:10:42.418 Number of Power States: 1 00:10:42.418 Current Power State: Power State #0 00:10:42.418 Power State #0: 00:10:42.418 Max Power: 0.00 W 00:10:42.418 Non-Operational State: Operational 00:10:42.418 Entry Latency: Not Reported 00:10:42.418 Exit Latency: Not Reported 00:10:42.418 Relative Read Throughput: 0 00:10:42.418 Relative Read Latency: 0 00:10:42.418 Relative Write Throughput: 0 00:10:42.418 Relative Write Latency: 0 00:10:42.418 Idle Power: Not Reported 00:10:42.418 Active Power: Not Reported 00:10:42.418 Non-Operational Permissive Mode: Not Supported 00:10:42.418 00:10:42.418 Health Information 00:10:42.418 ================== 00:10:42.418 Critical Warnings: 00:10:42.418 Available Spare Space: OK 00:10:42.418 Temperature: OK 00:10:42.418 Device Reliability: OK 00:10:42.418 Read Only: No 00:10:42.418 Volatile Memory Backup: OK 00:10:42.418 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:42.418 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:42.418 Available Spare: 0% 00:10:42.418 
[2024-07-15 11:36:50.152183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:42.418 [2024-07-15 11:36:50.152199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:42.418 [2024-07-15 11:36:50.152249] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:42.418 [2024-07-15 11:36:50.152266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.418 [2024-07-15 11:36:50.152277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.418 [2024-07-15 11:36:50.152287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.418 [2024-07-15 11:36:50.152297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.418 [2024-07-15 11:36:50.152788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:42.418 [2024-07-15 11:36:50.152812] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:42.418 [2024-07-15 11:36:50.153802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:42.418 [2024-07-15 11:36:50.153884] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:42.418 [2024-07-15 11:36:50.153899] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:42.418 [2024-07-15 11:36:50.154796] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:42.418 [2024-07-15 11:36:50.154821] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:42.418 [2024-07-15 11:36:50.154882] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:42.418 [2024-07-15 11:36:50.157749] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:10:42.418 Available Spare Threshold: 0% 00:10:42.418 Life Percentage Used: 0% 00:10:42.418 Data Units Read: 0 00:10:42.418 Data Units Written: 0 00:10:42.418 Host Read Commands: 0 00:10:42.418 Host Write Commands: 0 00:10:42.418 Controller Busy Time: 0 minutes 00:10:42.418 Power Cycles: 0 00:10:42.418 Power On Hours: 0 hours 00:10:42.418 Unsafe Shutdowns: 0 00:10:42.418 Unrecoverable Media Errors: 0 00:10:42.418 Lifetime Error Log Entries: 0 00:10:42.418 Warning Temperature Time: 0 minutes 00:10:42.418 Critical Temperature Time: 0 minutes 00:10:42.418 00:10:42.418 Number of Queues 00:10:42.418 ================ 00:10:42.418 Number of I/O Submission Queues: 127 00:10:42.418 Number of I/O Completion Queues: 127 00:10:42.418 00:10:42.418 Active Namespaces 00:10:42.418 ================= 00:10:42.418 Namespace ID:1 00:10:42.418 Error Recovery Timeout: Unlimited 00:10:42.418 Command 
Set Identifier: NVM (00h) 00:10:42.418 Deallocate: Supported 00:10:42.418 Deallocated/Unwritten Error: Not Supported 00:10:42.418 Deallocated Read Value: Unknown 00:10:42.418 Deallocate in Write Zeroes: Not Supported 00:10:42.418 Deallocated Guard Field: 0xFFFF 00:10:42.418 Flush: Supported 00:10:42.418 Reservation: Supported 00:10:42.418 Namespace Sharing Capabilities: Multiple Controllers 00:10:42.418 Size (in LBAs): 131072 (0GiB) 00:10:42.418 Capacity (in LBAs): 131072 (0GiB) 00:10:42.418 Utilization (in LBAs): 131072 (0GiB) 00:10:42.418 NGUID: 40486BA397804021B1B048BCCA7DBD8A 00:10:42.418 UUID: 40486ba3-9780-4021-b1b0-48bcca7dbd8a 00:10:42.418 Thin Provisioning: Not Supported 00:10:42.418 Per-NS Atomic Units: Yes 00:10:42.418 Atomic Boundary Size (Normal): 0 00:10:42.418 Atomic Boundary Size (PFail): 0 00:10:42.418 Atomic Boundary Offset: 0 00:10:42.418 Maximum Single Source Range Length: 65535 00:10:42.418 Maximum Copy Length: 65535 00:10:42.418 Maximum Source Range Count: 1 00:10:42.418 NGUID/EUI64 Never Reused: No 00:10:42.418 Namespace Write Protected: No 00:10:42.418 Number of LBA Formats: 1 00:10:42.418 Current LBA Format: LBA Format #00 00:10:42.418 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.418 00:10:42.418 11:36:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:42.418 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.418 [2024-07-15 11:36:50.385544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:47.717 Initializing NVMe Controllers 00:10:47.717 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:47.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:47.717 Initialization complete. Launching workers. 00:10:47.717 ======================================================== 00:10:47.717 Latency(us) 00:10:47.717 Device Information : IOPS MiB/s Average min max 00:10:47.717 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34721.41 135.63 3685.79 1152.24 9825.51 00:10:47.717 ======================================================== 00:10:47.717 Total : 34721.41 135.63 3685.79 1152.24 9825.51 00:10:47.717 00:10:47.717 [2024-07-15 11:36:55.405816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:47.717 11:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:47.717 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.717 [2024-07-15 11:36:55.649974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:52.977 Initializing NVMe Controllers 00:10:52.977 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:52.977 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:52.977 Initialization complete. Launching workers. 
00:10:52.977 ======================================================== 00:10:52.977 Latency(us) 00:10:52.977 Device Information : IOPS MiB/s Average min max 00:10:52.977 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15946.21 62.29 8032.25 5967.64 15863.60 00:10:52.977 ======================================================== 00:10:52.977 Total : 15946.21 62.29 8032.25 5967.64 15863.60 00:10:52.977 00:10:52.977 [2024-07-15 11:37:00.689644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:52.977 11:37:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:52.978 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.978 [2024-07-15 11:37:00.905708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:58.238 [2024-07-15 11:37:05.968043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:58.238 Initializing NVMe Controllers 00:10:58.238 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:58.238 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:58.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:58.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:58.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:58.238 Initialization complete. Launching workers. 00:10:58.238 Starting thread on core 2 00:10:58.238 Starting thread on core 3 00:10:58.238 Starting thread on core 1 00:10:58.238 11:37:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:58.238 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.495 [2024-07-15 11:37:06.280235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:01.833 [2024-07-15 11:37:09.352407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:01.833 Initializing NVMe Controllers 00:11:01.833 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.833 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:01.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:01.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:01.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:01.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:01.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:01.833 Initialization complete. Launching workers. 
00:11:01.833 Starting thread on core 1 with urgent priority queue 00:11:01.833 Starting thread on core 2 with urgent priority queue 00:11:01.833 Starting thread on core 3 with urgent priority queue 00:11:01.833 Starting thread on core 0 with urgent priority queue 00:11:01.833 SPDK bdev Controller (SPDK1 ) core 0: 5153.00 IO/s 19.41 secs/100000 ios 00:11:01.833 SPDK bdev Controller (SPDK1 ) core 1: 5648.00 IO/s 17.71 secs/100000 ios 00:11:01.833 SPDK bdev Controller (SPDK1 ) core 2: 5749.00 IO/s 17.39 secs/100000 ios 00:11:01.833 SPDK bdev Controller (SPDK1 ) core 3: 5180.00 IO/s 19.31 secs/100000 ios 00:11:01.833 ======================================================== 00:11:01.833 00:11:01.833 11:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:01.833 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.833 [2024-07-15 11:37:09.652971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:01.833 Initializing NVMe Controllers 00:11:01.833 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.833 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.833 Namespace ID: 1 size: 0GB 00:11:01.833 Initialization complete. 00:11:01.833 INFO: using host memory buffer for IO 00:11:01.833 Hello world! 00:11:01.833 [2024-07-15 11:37:09.688487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:01.833 11:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:01.833 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.106 [2024-07-15 11:37:09.976338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.036 Initializing NVMe Controllers 00:11:03.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.036 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.036 Initialization complete. Launching workers. 
00:11:03.036 submit (in ns) avg, min, max = 7831.3, 3498.9, 4013973.3 00:11:03.036 complete (in ns) avg, min, max = 27197.7, 2065.6, 8004347.8 00:11:03.036 00:11:03.036 Submit histogram 00:11:03.036 ================ 00:11:03.036 Range in us Cumulative Count 00:11:03.036 3.484 - 3.508: 0.0300% ( 4) 00:11:03.036 3.508 - 3.532: 0.2699% ( 32) 00:11:03.036 3.532 - 3.556: 0.9597% ( 92) 00:11:03.036 3.556 - 3.579: 2.9765% ( 269) 00:11:03.036 3.579 - 3.603: 6.9801% ( 534) 00:11:03.036 3.603 - 3.627: 14.2600% ( 971) 00:11:03.036 3.627 - 3.650: 22.8820% ( 1150) 00:11:03.036 3.650 - 3.674: 32.4336% ( 1274) 00:11:03.036 3.674 - 3.698: 40.1335% ( 1027) 00:11:03.036 3.698 - 3.721: 47.1960% ( 942) 00:11:03.036 3.721 - 3.745: 52.4291% ( 698) 00:11:03.036 3.745 - 3.769: 57.1375% ( 628) 00:11:03.036 3.769 - 3.793: 61.2236% ( 545) 00:11:03.036 3.793 - 3.816: 65.0322% ( 508) 00:11:03.036 3.816 - 3.840: 68.3386% ( 441) 00:11:03.036 3.840 - 3.864: 72.1622% ( 510) 00:11:03.036 3.864 - 3.887: 75.8510% ( 492) 00:11:03.036 3.887 - 3.911: 79.6446% ( 506) 00:11:03.036 3.911 - 3.935: 82.9285% ( 438) 00:11:03.036 3.935 - 3.959: 85.2976% ( 316) 00:11:03.036 3.959 - 3.982: 87.5619% ( 302) 00:11:03.036 3.982 - 4.006: 89.3162% ( 234) 00:11:03.036 4.006 - 4.030: 90.7482% ( 191) 00:11:03.036 4.030 - 4.053: 91.9853% ( 165) 00:11:03.036 4.053 - 4.077: 92.9000% ( 122) 00:11:03.036 4.077 - 4.101: 93.7397% ( 112) 00:11:03.036 4.101 - 4.124: 94.4519% ( 95) 00:11:03.036 4.124 - 4.148: 94.9393% ( 65) 00:11:03.036 4.148 - 4.172: 95.2842% ( 46) 00:11:03.036 4.172 - 4.196: 95.4566% ( 23) 00:11:03.036 4.196 - 4.219: 95.7040% ( 33) 00:11:03.036 4.219 - 4.243: 95.9139% ( 28) 00:11:03.036 4.243 - 4.267: 96.0189% ( 14) 00:11:03.036 4.267 - 4.290: 96.1763% ( 21) 00:11:03.036 4.290 - 4.314: 96.2588% ( 11) 00:11:03.036 4.314 - 4.338: 96.4088% ( 20) 00:11:03.036 4.338 - 4.361: 96.4762% ( 9) 00:11:03.036 4.361 - 4.385: 96.5287% ( 7) 00:11:03.036 4.385 - 4.409: 96.5962% ( 9) 00:11:03.036 4.409 - 4.433: 96.6187% ( 3) 00:11:03.037 4.433 - 4.456: 96.6412% ( 3) 00:11:03.037 4.456 - 4.480: 96.6862% ( 6) 00:11:03.037 4.480 - 4.504: 96.7761% ( 12) 00:11:03.037 4.504 - 4.527: 96.8361% ( 8) 00:11:03.037 4.527 - 4.551: 96.9261% ( 12) 00:11:03.037 4.551 - 4.575: 97.0385% ( 15) 00:11:03.037 4.575 - 4.599: 97.1285% ( 12) 00:11:03.037 4.599 - 4.622: 97.2035% ( 10) 00:11:03.037 4.622 - 4.646: 97.2410% ( 5) 00:11:03.037 4.646 - 4.670: 97.3234% ( 11) 00:11:03.037 4.670 - 4.693: 97.3759% ( 7) 00:11:03.037 4.693 - 4.717: 97.4434% ( 9) 00:11:03.037 4.717 - 4.741: 97.4959% ( 7) 00:11:03.037 4.741 - 4.764: 97.6008% ( 14) 00:11:03.037 4.764 - 4.788: 97.6533% ( 7) 00:11:03.037 4.788 - 4.812: 97.7058% ( 7) 00:11:03.037 4.812 - 4.836: 97.8183% ( 15) 00:11:03.037 4.836 - 4.859: 97.8782% ( 8) 00:11:03.037 4.859 - 4.883: 97.9232% ( 6) 00:11:03.037 4.883 - 4.907: 97.9832% ( 8) 00:11:03.037 4.907 - 4.930: 98.0132% ( 4) 00:11:03.037 4.930 - 4.954: 98.0957% ( 11) 00:11:03.037 4.954 - 4.978: 98.1407% ( 6) 00:11:03.037 4.978 - 5.001: 98.1556% ( 2) 00:11:03.037 5.001 - 5.025: 98.1856% ( 4) 00:11:03.037 5.025 - 5.049: 98.2156% ( 4) 00:11:03.037 5.049 - 5.073: 98.2381% ( 3) 00:11:03.037 5.073 - 5.096: 98.2606% ( 3) 00:11:03.037 5.096 - 5.120: 98.2681% ( 1) 00:11:03.037 5.120 - 5.144: 98.2831% ( 2) 00:11:03.037 5.144 - 5.167: 98.3206% ( 5) 00:11:03.037 5.167 - 5.191: 98.3281% ( 1) 00:11:03.037 5.191 - 5.215: 98.3431% ( 2) 00:11:03.037 5.215 - 5.239: 98.3581% ( 2) 00:11:03.037 5.262 - 5.286: 98.3806% ( 3) 00:11:03.037 5.286 - 5.310: 98.3956% ( 2) 00:11:03.037 5.310 - 5.333: 98.4256% ( 
4) 00:11:03.037 5.333 - 5.357: 98.4330% ( 1) 00:11:03.037 5.357 - 5.381: 98.4480% ( 2) 00:11:03.037 5.452 - 5.476: 98.4630% ( 2) 00:11:03.037 5.476 - 5.499: 98.4855% ( 3) 00:11:03.037 5.499 - 5.523: 98.5005% ( 2) 00:11:03.037 5.570 - 5.594: 98.5155% ( 2) 00:11:03.037 5.594 - 5.618: 98.5230% ( 1) 00:11:03.037 5.641 - 5.665: 98.5380% ( 2) 00:11:03.037 5.665 - 5.689: 98.5455% ( 1) 00:11:03.037 5.689 - 5.713: 98.5530% ( 1) 00:11:03.037 5.736 - 5.760: 98.5605% ( 1) 00:11:03.037 5.784 - 5.807: 98.5755% ( 2) 00:11:03.037 5.807 - 5.831: 98.5830% ( 1) 00:11:03.037 5.879 - 5.902: 98.5905% ( 1) 00:11:03.037 5.950 - 5.973: 98.5980% ( 1) 00:11:03.037 5.973 - 5.997: 98.6055% ( 1) 00:11:03.037 5.997 - 6.021: 98.6130% ( 1) 00:11:03.037 6.068 - 6.116: 98.6205% ( 1) 00:11:03.037 6.637 - 6.684: 98.6280% ( 1) 00:11:03.037 6.779 - 6.827: 98.6355% ( 1) 00:11:03.037 7.016 - 7.064: 98.6655% ( 4) 00:11:03.037 7.064 - 7.111: 98.6730% ( 1) 00:11:03.037 7.159 - 7.206: 98.6805% ( 1) 00:11:03.037 7.206 - 7.253: 98.6880% ( 1) 00:11:03.037 7.253 - 7.301: 98.7030% ( 2) 00:11:03.037 7.301 - 7.348: 98.7105% ( 1) 00:11:03.037 7.443 - 7.490: 98.7179% ( 1) 00:11:03.037 7.633 - 7.680: 98.7254% ( 1) 00:11:03.037 7.680 - 7.727: 98.7404% ( 2) 00:11:03.037 7.727 - 7.775: 98.7479% ( 1) 00:11:03.037 7.775 - 7.822: 98.7554% ( 1) 00:11:03.037 7.822 - 7.870: 98.7704% ( 2) 00:11:03.037 7.870 - 7.917: 98.7779% ( 1) 00:11:03.037 7.964 - 8.012: 98.7854% ( 1) 00:11:03.037 8.344 - 8.391: 98.7929% ( 1) 00:11:03.037 8.439 - 8.486: 98.8079% ( 2) 00:11:03.037 8.533 - 8.581: 98.8154% ( 1) 00:11:03.037 8.581 - 8.628: 98.8229% ( 1) 00:11:03.037 8.628 - 8.676: 98.8379% ( 2) 00:11:03.037 8.723 - 8.770: 98.8454% ( 1) 00:11:03.037 8.770 - 8.818: 98.8529% ( 1) 00:11:03.037 8.865 - 8.913: 98.8604% ( 1) 00:11:03.037 8.913 - 8.960: 98.8679% ( 1) 00:11:03.037 8.960 - 9.007: 98.8904% ( 3) 00:11:03.037 9.007 - 9.055: 98.8979% ( 1) 00:11:03.037 9.102 - 9.150: 98.9129% ( 2) 00:11:03.037 9.292 - 9.339: 98.9279% ( 2) 00:11:03.037 9.339 - 9.387: 98.9354% ( 1) 00:11:03.037 9.387 - 9.434: 98.9429% ( 1) 00:11:03.037 9.529 - 9.576: 98.9504% ( 1) 00:11:03.037 9.671 - 9.719: 98.9579% ( 1) 00:11:03.037 9.861 - 9.908: 98.9654% ( 1) 00:11:03.037 10.193 - 10.240: 98.9729% ( 1) 00:11:03.037 10.382 - 10.430: 98.9804% ( 1) 00:11:03.037 10.430 - 10.477: 98.9879% ( 1) 00:11:03.037 10.619 - 10.667: 98.9954% ( 1) 00:11:03.037 10.714 - 10.761: 99.0028% ( 1) 00:11:03.037 11.330 - 11.378: 99.0103% ( 1) 00:11:03.037 11.567 - 11.615: 99.0178% ( 1) 00:11:03.037 11.710 - 11.757: 99.0253% ( 1) 00:11:03.037 11.852 - 11.899: 99.0328% ( 1) 00:11:03.037 12.231 - 12.326: 99.0403% ( 1) 00:11:03.037 12.516 - 12.610: 99.0478% ( 1) 00:11:03.037 13.179 - 13.274: 99.0553% ( 1) 00:11:03.037 13.274 - 13.369: 99.0628% ( 1) 00:11:03.037 13.464 - 13.559: 99.0703% ( 1) 00:11:03.037 13.938 - 14.033: 99.0778% ( 1) 00:11:03.037 14.127 - 14.222: 99.0928% ( 2) 00:11:03.037 14.412 - 14.507: 99.1003% ( 1) 00:11:03.037 14.507 - 14.601: 99.1153% ( 2) 00:11:03.037 15.739 - 15.834: 99.1228% ( 1) 00:11:03.037 17.161 - 17.256: 99.1303% ( 1) 00:11:03.037 17.256 - 17.351: 99.1378% ( 1) 00:11:03.037 17.446 - 17.541: 99.1453% ( 1) 00:11:03.037 17.541 - 17.636: 99.1828% ( 5) 00:11:03.037 17.636 - 17.730: 99.2053% ( 3) 00:11:03.037 17.730 - 17.825: 99.2353% ( 4) 00:11:03.037 17.825 - 17.920: 99.2952% ( 8) 00:11:03.037 17.920 - 18.015: 99.3477% ( 7) 00:11:03.037 18.015 - 18.110: 99.4077% ( 8) 00:11:03.037 18.110 - 18.204: 99.4377% ( 4) 00:11:03.037 18.204 - 18.299: 99.5127% ( 10) 00:11:03.037 18.299 - 18.394: 99.5502% ( 5) 
00:11:03.037 18.394 - 18.489: 99.6551% ( 14) 00:11:03.037 18.489 - 18.584: 99.6926% ( 5) 00:11:03.037 18.584 - 18.679: 99.7526% ( 8) 00:11:03.037 18.679 - 18.773: 99.7601% ( 1) 00:11:03.037 18.773 - 18.868: 99.7676% ( 1) 00:11:03.037 18.868 - 18.963: 99.7826% ( 2) 00:11:03.037 18.963 - 19.058: 99.8201% ( 5) 00:11:03.037 19.058 - 19.153: 99.8276% ( 1) 00:11:03.037 19.153 - 19.247: 99.8351% ( 1) 00:11:03.037 19.247 - 19.342: 99.8501% ( 2) 00:11:03.037 19.437 - 19.532: 99.8575% ( 1) 00:11:03.037 19.532 - 19.627: 99.8650% ( 1) 00:11:03.037 20.290 - 20.385: 99.8725% ( 1) 00:11:03.037 20.385 - 20.480: 99.8800% ( 1) 00:11:03.037 21.902 - 21.997: 99.8875% ( 1) 00:11:03.037 24.273 - 24.462: 99.8950% ( 1) 00:11:03.037 27.496 - 27.686: 99.9025% ( 1) 00:11:03.037 3980.705 - 4004.978: 99.9775% ( 10) 00:11:03.037 4004.978 - 4029.250: 100.0000% ( 3) 00:11:03.037 00:11:03.037 Complete histogram 00:11:03.037 ================== 00:11:03.037 Range in us Cumulative Count 00:11:03.037 2.062 - 2.074: 4.1236% ( 550) 00:11:03.037 2.074 - 2.086: 37.2620% ( 4420) 00:11:03.037 2.086 - 2.098: 42.2402% ( 664) 00:11:03.037 2.098 - 2.110: 48.6280% ( 852) 00:11:03.037 2.110 - 2.121: 56.6952% ( 1076) 00:11:03.037 2.121 - 2.133: 58.3596% ( 222) 00:11:03.037 2.133 - 2.145: 65.1447% ( 905) 00:11:03.037 2.145 - 2.157: 75.1087% ( 1329) 00:11:03.037 2.157 - 2.169: 76.2033% ( 146) 00:11:03.037 2.169 - 2.181: 79.8021% ( 480) 00:11:03.037 2.181 - 2.193: 82.9135% ( 415) 00:11:03.037 2.193 - 2.204: 83.6782% ( 102) 00:11:03.037 2.204 - 2.216: 85.6200% ( 259) 00:11:03.037 2.216 - 2.228: 88.7015% ( 411) 00:11:03.037 2.228 - 2.240: 90.5533% ( 247) 00:11:03.037 2.240 - 2.252: 92.1053% ( 207) 00:11:03.037 2.252 - 2.264: 93.1549% ( 140) 00:11:03.037 2.264 - 2.276: 93.3948% ( 32) 00:11:03.037 2.276 - 2.287: 93.6647% ( 36) 00:11:03.037 2.287 - 2.299: 93.9346% ( 36) 00:11:03.037 2.299 - 2.311: 94.3395% ( 54) 00:11:03.037 2.311 - 2.323: 94.6394% ( 40) 00:11:03.037 2.323 - 2.335: 94.6919% ( 7) 00:11:03.037 2.335 - 2.347: 94.8118% ( 16) 00:11:03.037 2.347 - 2.359: 95.0367% ( 30) 00:11:03.037 2.359 - 2.370: 95.2467% ( 28) 00:11:03.037 2.370 - 2.382: 95.6665% ( 56) 00:11:03.037 2.382 - 2.394: 96.0639% ( 53) 00:11:03.037 2.394 - 2.406: 96.3638% ( 40) 00:11:03.037 2.406 - 2.418: 96.4612% ( 13) 00:11:03.037 2.418 - 2.430: 96.6112% ( 20) 00:11:03.037 2.430 - 2.441: 96.7236% ( 15) 00:11:03.037 2.441 - 2.453: 96.8061% ( 11) 00:11:03.037 2.453 - 2.465: 96.8886% ( 11) 00:11:03.037 2.465 - 2.477: 96.9861% ( 13) 00:11:03.037 2.477 - 2.489: 97.0535% ( 9) 00:11:03.037 2.489 - 2.501: 97.1135% ( 8) 00:11:03.037 2.501 - 2.513: 97.1585% ( 6) 00:11:03.037 2.513 - 2.524: 97.1885% ( 4) 00:11:03.038 2.524 - 2.536: 97.2035% ( 2) 00:11:03.038 2.536 - 2.548: 97.2335% ( 4) 00:11:03.038 2.548 - 2.560: 97.2560% ( 3) 00:11:03.038 2.560 - 2.572: 97.2785% ( 3) 00:11:03.038 2.572 - 2.584: 97.3234% ( 6) 00:11:03.038 2.584 - 2.596: 97.3984% ( 10) 00:11:03.038 2.596 - 2.607: 97.5184% ( 16) 00:11:03.038 2.607 - 2.619: 97.5933% ( 10) 00:11:03.038 2.619 - 2.631: 97.6308% ( 5) 00:11:03.038 2.631 - 2.643: 97.7058% ( 10) 00:11:03.038 2.643 - 2.655: 97.7958% ( 12) 00:11:03.038 2.655 - 2.667: 97.8258% ( 4) 00:11:03.038 2.667 - 2.679: 97.8782% ( 7) 00:11:03.038 2.679 - 2.690: 97.9457% ( 9) 00:11:03.038 2.690 - 2.702: 97.9907% ( 6) 00:11:03.038 2.702 - 2.714: 97.9982% ( 1) 00:11:03.038 2.714 - 2.726: 98.0432% ( 6) 00:11:03.038 2.726 - 2.738: 98.0732% ( 4) 00:11:03.038 2.738 - 2.750: 98.0807% ( 1) 00:11:03.038 2.761 - 2.773: 98.0957% ( 2) 00:11:03.038 2.773 - 2.785: 98.1107% ( 2) 
00:11:03.038 2.785 - 2.797: 98.1182% ( 1) 00:11:03.038 2.797 - 2.809: 98.1332% ( 2) 00:11:03.038 2.809 - 2.821: 98.1407% ( 1) 00:11:03.038 2.821 - 2.833: 98.1481% ( 1) 00:11:03.038 2.833 - 2.844: 98.1631% ( 2) 00:11:03.038 2.844 - 2.856: 98.1706% ( 1) 00:11:03.038 2.868 - 2.880: 98.1931% ( 3) 00:11:03.038 2.880 - 2.892: 98.2006% ( 1) 00:11:03.038 2.892 - 2.904: 98.2231% ( 3) 00:11:03.038 2.904 - 2.916: 98.2381% ( 2) 00:11:03.038 2.916 - 2.927: 98.2531% ( 2) 00:11:03.038 2.939 - 2.951: 98.2606% ( 1) 00:11:03.038 2.951 - 2.963: 9[2024-07-15 11:37:11.000460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.296 8.2906% ( 4) 00:11:03.296 2.975 - 2.987: 98.3056% ( 2) 00:11:03.296 2.987 - 2.999: 98.3431% ( 5) 00:11:03.296 2.999 - 3.010: 98.3581% ( 2) 00:11:03.296 3.010 - 3.022: 98.3731% ( 2) 00:11:03.296 3.081 - 3.105: 98.3806% ( 1) 00:11:03.296 3.105 - 3.129: 98.3881% ( 1) 00:11:03.296 3.129 - 3.153: 98.4106% ( 3) 00:11:03.296 3.153 - 3.176: 98.4181% ( 1) 00:11:03.296 3.200 - 3.224: 98.4256% ( 1) 00:11:03.296 3.224 - 3.247: 98.4555% ( 4) 00:11:03.296 3.295 - 3.319: 98.4630% ( 1) 00:11:03.296 3.319 - 3.342: 98.4705% ( 1) 00:11:03.296 3.342 - 3.366: 98.4780% ( 1) 00:11:03.296 3.366 - 3.390: 98.5005% ( 3) 00:11:03.296 3.390 - 3.413: 98.5080% ( 1) 00:11:03.296 3.437 - 3.461: 98.5230% ( 2) 00:11:03.296 3.461 - 3.484: 98.5455% ( 3) 00:11:03.296 3.508 - 3.532: 98.5530% ( 1) 00:11:03.296 3.532 - 3.556: 98.5605% ( 1) 00:11:03.296 3.579 - 3.603: 98.5680% ( 1) 00:11:03.296 3.603 - 3.627: 98.5755% ( 1) 00:11:03.296 3.627 - 3.650: 98.5905% ( 2) 00:11:03.296 3.674 - 3.698: 98.5980% ( 1) 00:11:03.296 3.698 - 3.721: 98.6055% ( 1) 00:11:03.296 3.745 - 3.769: 98.6205% ( 2) 00:11:03.296 3.793 - 3.816: 98.6280% ( 1) 00:11:03.296 3.887 - 3.911: 98.6355% ( 1) 00:11:03.296 3.959 - 3.982: 98.6430% ( 1) 00:11:03.296 4.053 - 4.077: 98.6505% ( 1) 00:11:03.296 4.788 - 4.812: 98.6580% ( 1) 00:11:03.296 5.096 - 5.120: 98.6655% ( 1) 00:11:03.296 5.120 - 5.144: 98.6730% ( 1) 00:11:03.296 5.191 - 5.215: 98.6880% ( 2) 00:11:03.296 5.476 - 5.499: 98.6955% ( 1) 00:11:03.296 5.594 - 5.618: 98.7105% ( 2) 00:11:03.296 5.618 - 5.641: 98.7254% ( 2) 00:11:03.296 5.641 - 5.665: 98.7329% ( 1) 00:11:03.296 5.689 - 5.713: 98.7404% ( 1) 00:11:03.296 5.736 - 5.760: 98.7479% ( 1) 00:11:03.296 5.760 - 5.784: 98.7554% ( 1) 00:11:03.296 6.044 - 6.068: 98.7629% ( 1) 00:11:03.296 6.163 - 6.210: 98.7704% ( 1) 00:11:03.296 6.305 - 6.353: 98.7854% ( 2) 00:11:03.296 6.400 - 6.447: 98.8004% ( 2) 00:11:03.296 6.447 - 6.495: 98.8079% ( 1) 00:11:03.296 6.779 - 6.827: 98.8154% ( 1) 00:11:03.296 6.874 - 6.921: 98.8229% ( 1) 00:11:03.296 6.921 - 6.969: 98.8304% ( 1) 00:11:03.296 7.111 - 7.159: 98.8379% ( 1) 00:11:03.297 7.301 - 7.348: 98.8529% ( 2) 00:11:03.297 7.633 - 7.680: 98.8604% ( 1) 00:11:03.297 8.249 - 8.296: 98.8679% ( 1) 00:11:03.297 8.296 - 8.344: 98.8754% ( 1) 00:11:03.297 15.644 - 15.739: 98.8904% ( 2) 00:11:03.297 15.739 - 15.834: 98.9054% ( 2) 00:11:03.297 15.834 - 15.929: 98.9129% ( 1) 00:11:03.297 15.929 - 16.024: 98.9579% ( 6) 00:11:03.297 16.024 - 16.119: 99.0103% ( 7) 00:11:03.297 16.119 - 16.213: 99.0403% ( 4) 00:11:03.297 16.213 - 16.308: 99.0778% ( 5) 00:11:03.297 16.308 - 16.403: 99.1078% ( 4) 00:11:03.297 16.403 - 16.498: 99.1303% ( 3) 00:11:03.297 16.593 - 16.687: 99.1753% ( 6) 00:11:03.297 16.687 - 16.782: 99.2278% ( 7) 00:11:03.297 16.782 - 16.877: 99.2578% ( 4) 00:11:03.297 16.877 - 16.972: 99.2877% ( 4) 00:11:03.297 16.972 - 17.067: 99.3027% ( 2) 00:11:03.297 17.067 - 
17.161: 99.3327% ( 4) 00:11:03.297 17.256 - 17.351: 99.3402% ( 1) 00:11:03.297 17.351 - 17.446: 99.3552% ( 2) 00:11:03.297 17.446 - 17.541: 99.3702% ( 2) 00:11:03.297 17.920 - 18.015: 99.3777% ( 1) 00:11:03.297 19.627 - 19.721: 99.3852% ( 1) 00:11:03.297 3009.801 - 3021.938: 99.3927% ( 1) 00:11:03.297 3034.074 - 3046.210: 99.4077% ( 2) 00:11:03.297 3046.210 - 3058.347: 99.4152% ( 1) 00:11:03.297 3980.705 - 4004.978: 99.9025% ( 65) 00:11:03.297 4004.978 - 4029.250: 99.9775% ( 10) 00:11:03.297 5000.154 - 5024.427: 99.9850% ( 1) 00:11:03.297 7961.410 - 8009.956: 100.0000% ( 2) 00:11:03.297 00:11:03.297 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:03.297 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:03.297 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:03.297 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:03.297 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:03.297 [ 00:11:03.297 { 00:11:03.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:03.297 "subtype": "Discovery", 00:11:03.297 "listen_addresses": [], 00:11:03.297 "allow_any_host": true, 00:11:03.297 "hosts": [] 00:11:03.297 }, 00:11:03.297 { 00:11:03.297 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:03.297 "subtype": "NVMe", 00:11:03.297 "listen_addresses": [ 00:11:03.297 { 00:11:03.297 "trtype": "VFIOUSER", 00:11:03.297 "adrfam": "IPv4", 00:11:03.297 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:03.297 "trsvcid": "0" 00:11:03.297 } 00:11:03.297 ], 00:11:03.297 "allow_any_host": true, 00:11:03.297 "hosts": [], 00:11:03.297 "serial_number": "SPDK1", 00:11:03.297 "model_number": "SPDK bdev Controller", 00:11:03.297 "max_namespaces": 32, 00:11:03.297 "min_cntlid": 1, 00:11:03.297 "max_cntlid": 65519, 00:11:03.297 "namespaces": [ 00:11:03.297 { 00:11:03.297 "nsid": 1, 00:11:03.297 "bdev_name": "Malloc1", 00:11:03.297 "name": "Malloc1", 00:11:03.297 "nguid": "40486BA397804021B1B048BCCA7DBD8A", 00:11:03.297 "uuid": "40486ba3-9780-4021-b1b0-48bcca7dbd8a" 00:11:03.297 } 00:11:03.297 ] 00:11:03.297 }, 00:11:03.297 { 00:11:03.297 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:03.297 "subtype": "NVMe", 00:11:03.297 "listen_addresses": [ 00:11:03.297 { 00:11:03.297 "trtype": "VFIOUSER", 00:11:03.297 "adrfam": "IPv4", 00:11:03.297 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:03.297 "trsvcid": "0" 00:11:03.297 } 00:11:03.297 ], 00:11:03.297 "allow_any_host": true, 00:11:03.297 "hosts": [], 00:11:03.297 "serial_number": "SPDK2", 00:11:03.297 "model_number": "SPDK bdev Controller", 00:11:03.297 "max_namespaces": 32, 00:11:03.297 "min_cntlid": 1, 00:11:03.297 "max_cntlid": 65519, 00:11:03.297 "namespaces": [ 00:11:03.297 { 00:11:03.297 "nsid": 1, 00:11:03.297 "bdev_name": "Malloc2", 00:11:03.297 "name": "Malloc2", 00:11:03.297 "nguid": "AC0C8E48600648279F55B05B21823C6E", 00:11:03.297 "uuid": "ac0c8e48-6006-4827-9f55-b05b21823c6e" 00:11:03.297 } 00:11:03.297 ] 00:11:03.297 } 00:11:03.297 ] 00:11:03.555 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:03.555 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2972392 00:11:03.555 
11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:03.555 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:03.556 11:37:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:03.556 11:37:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:03.556 11:37:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:03.556 11:37:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:03.556 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:03.556 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:03.556 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.556 [2024-07-15 11:37:11.450200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.813 Malloc3 00:11:03.813 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:04.071 [2024-07-15 11:37:11.801696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:04.071 11:37:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:04.071 Asynchronous Event Request test 00:11:04.071 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:04.071 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:04.071 Registering asynchronous event callbacks... 00:11:04.071 Starting namespace attribute notice tests for all controllers... 00:11:04.071 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:04.071 aer_cb - Changed Namespace 00:11:04.071 Cleaning up... 
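A minimal sketch of how the namespace-attach sequence above could be replayed by hand against the same target (assuming this workspace's rpc.py is usable outside the autotest harness and jq is installed; the subsystem NQN and bdev name are the ones reported in the JSON that follows):
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create a 64 MiB malloc bdev with 512-byte blocks to back the new namespace.
    $RPC bdev_malloc_create 64 512 --name Malloc3
    # Attach it to the existing subsystem as NSID 2; this is what triggers the
    # "Changed Namespace" asynchronous event reported by the aer test above.
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # Confirm the subsystem now lists both namespaces (expect NSIDs 1 and 2).
    $RPC nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode1") | .namespaces[].nsid'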
00:11:04.330 [ 00:11:04.330 { 00:11:04.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:04.330 "subtype": "Discovery", 00:11:04.330 "listen_addresses": [], 00:11:04.330 "allow_any_host": true, 00:11:04.330 "hosts": [] 00:11:04.330 }, 00:11:04.330 { 00:11:04.330 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:04.330 "subtype": "NVMe", 00:11:04.330 "listen_addresses": [ 00:11:04.330 { 00:11:04.330 "trtype": "VFIOUSER", 00:11:04.330 "adrfam": "IPv4", 00:11:04.330 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:04.330 "trsvcid": "0" 00:11:04.330 } 00:11:04.330 ], 00:11:04.330 "allow_any_host": true, 00:11:04.330 "hosts": [], 00:11:04.330 "serial_number": "SPDK1", 00:11:04.330 "model_number": "SPDK bdev Controller", 00:11:04.330 "max_namespaces": 32, 00:11:04.330 "min_cntlid": 1, 00:11:04.330 "max_cntlid": 65519, 00:11:04.330 "namespaces": [ 00:11:04.330 { 00:11:04.330 "nsid": 1, 00:11:04.330 "bdev_name": "Malloc1", 00:11:04.330 "name": "Malloc1", 00:11:04.330 "nguid": "40486BA397804021B1B048BCCA7DBD8A", 00:11:04.330 "uuid": "40486ba3-9780-4021-b1b0-48bcca7dbd8a" 00:11:04.330 }, 00:11:04.330 { 00:11:04.330 "nsid": 2, 00:11:04.330 "bdev_name": "Malloc3", 00:11:04.330 "name": "Malloc3", 00:11:04.330 "nguid": "A532E5A679B1451690D653DAC7869AA6", 00:11:04.330 "uuid": "a532e5a6-79b1-4516-90d6-53dac7869aa6" 00:11:04.330 } 00:11:04.330 ] 00:11:04.330 }, 00:11:04.330 { 00:11:04.330 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:04.330 "subtype": "NVMe", 00:11:04.330 "listen_addresses": [ 00:11:04.330 { 00:11:04.330 "trtype": "VFIOUSER", 00:11:04.330 "adrfam": "IPv4", 00:11:04.330 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:04.330 "trsvcid": "0" 00:11:04.330 } 00:11:04.330 ], 00:11:04.330 "allow_any_host": true, 00:11:04.330 "hosts": [], 00:11:04.330 "serial_number": "SPDK2", 00:11:04.330 "model_number": "SPDK bdev Controller", 00:11:04.330 "max_namespaces": 32, 00:11:04.330 "min_cntlid": 1, 00:11:04.330 "max_cntlid": 65519, 00:11:04.330 "namespaces": [ 00:11:04.330 { 00:11:04.330 "nsid": 1, 00:11:04.330 "bdev_name": "Malloc2", 00:11:04.330 "name": "Malloc2", 00:11:04.330 "nguid": "AC0C8E48600648279F55B05B21823C6E", 00:11:04.330 "uuid": "ac0c8e48-6006-4827-9f55-b05b21823c6e" 00:11:04.330 } 00:11:04.330 ] 00:11:04.330 } 00:11:04.330 ] 00:11:04.330 11:37:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2972392 00:11:04.330 11:37:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:04.330 11:37:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:04.330 11:37:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:04.330 11:37:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:04.330 [2024-07-15 11:37:12.095251] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:11:04.330 [2024-07-15 11:37:12.095294] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972525 ] 00:11:04.330 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.330 [2024-07-15 11:37:12.127884] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:04.330 [2024-07-15 11:37:12.133235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:04.330 [2024-07-15 11:37:12.133263] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faaceab0000 00:11:04.330 [2024-07-15 11:37:12.134236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.135243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.136256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.137266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.138270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.139275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.140281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.141285] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:04.330 [2024-07-15 11:37:12.142295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:04.330 [2024-07-15 11:37:12.142315] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faaceaa5000 00:11:04.330 [2024-07-15 11:37:12.143461] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:04.330 [2024-07-15 11:37:12.158244] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:04.330 [2024-07-15 11:37:12.158279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:04.331 [2024-07-15 11:37:12.160375] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:04.331 [2024-07-15 11:37:12.160431] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:04.331 [2024-07-15 11:37:12.160520] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:04.331 [2024-07-15 11:37:12.160544] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:04.331 [2024-07-15 11:37:12.160554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:04.331 [2024-07-15 11:37:12.161381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:04.331 [2024-07-15 11:37:12.161402] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:04.331 [2024-07-15 11:37:12.161415] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:04.331 [2024-07-15 11:37:12.165748] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:04.331 [2024-07-15 11:37:12.165770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:04.331 [2024-07-15 11:37:12.165785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:04.331 [2024-07-15 11:37:12.166424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:04.331 [2024-07-15 11:37:12.166445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:04.331 [2024-07-15 11:37:12.167428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:04.331 [2024-07-15 11:37:12.167449] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:04.331 [2024-07-15 11:37:12.167458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:04.331 [2024-07-15 11:37:12.167470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:04.331 [2024-07-15 11:37:12.167579] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:04.331 [2024-07-15 11:37:12.167587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:04.331 [2024-07-15 11:37:12.167595] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:04.331 [2024-07-15 11:37:12.168444] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:04.331 [2024-07-15 11:37:12.169452] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:04.331 [2024-07-15 11:37:12.170466] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:04.331 [2024-07-15 11:37:12.171454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:04.331 [2024-07-15 11:37:12.171534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:04.331 [2024-07-15 11:37:12.172475] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:04.331 [2024-07-15 11:37:12.172496] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:04.331 [2024-07-15 11:37:12.172506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.172529] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:04.331 [2024-07-15 11:37:12.172542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.172566] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:04.331 [2024-07-15 11:37:12.172576] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.331 [2024-07-15 11:37:12.172597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.331 [2024-07-15 11:37:12.176757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.176781] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:04.331 [2024-07-15 11:37:12.176795] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:04.331 [2024-07-15 11:37:12.176803] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:04.331 [2024-07-15 11:37:12.176811] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:04.331 [2024-07-15 11:37:12.176819] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:04.331 [2024-07-15 11:37:12.176827] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:04.331 [2024-07-15 11:37:12.176835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.176849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.176866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:11:04.331 [2024-07-15 11:37:12.184749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.184778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.331 [2024-07-15 11:37:12.184793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.331 [2024-07-15 11:37:12.184805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.331 [2024-07-15 11:37:12.184818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.331 [2024-07-15 11:37:12.184827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.184843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.184859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:04.331 [2024-07-15 11:37:12.192749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.192769] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:04.331 [2024-07-15 11:37:12.192778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.192789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.192804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.192819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:04.331 [2024-07-15 11:37:12.200750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.200822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.200838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.200853] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:04.331 [2024-07-15 11:37:12.200861] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:04.331 [2024-07-15 11:37:12.200871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:11:04.331 [2024-07-15 11:37:12.208764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.208788] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:04.331 [2024-07-15 11:37:12.208809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.208824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.208837] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:04.331 [2024-07-15 11:37:12.208845] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.331 [2024-07-15 11:37:12.208855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.331 [2024-07-15 11:37:12.216760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.216790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.216808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.216821] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:04.331 [2024-07-15 11:37:12.216830] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.331 [2024-07-15 11:37:12.216840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.331 [2024-07-15 11:37:12.224762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:04.331 [2024-07-15 11:37:12.224783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.224796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.224811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.224822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.224834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:04.331 [2024-07-15 11:37:12.224843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:04.331 
[2024-07-15 11:37:12.224853] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:04.331 [2024-07-15 11:37:12.224860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:04.332 [2024-07-15 11:37:12.224869] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:04.332 [2024-07-15 11:37:12.224896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:04.332 [2024-07-15 11:37:12.232749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.232775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:04.332 [2024-07-15 11:37:12.240747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.240772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:04.332 [2024-07-15 11:37:12.248762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.248787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:04.332 [2024-07-15 11:37:12.256765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.256799] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:04.332 [2024-07-15 11:37:12.256811] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:04.332 [2024-07-15 11:37:12.256817] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:04.332 [2024-07-15 11:37:12.256823] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:04.332 [2024-07-15 11:37:12.256833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:04.332 [2024-07-15 11:37:12.256845] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:04.332 [2024-07-15 11:37:12.256853] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:04.332 [2024-07-15 11:37:12.256862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:04.332 [2024-07-15 11:37:12.256873] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:04.332 [2024-07-15 11:37:12.256881] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.332 [2024-07-15 11:37:12.256889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:11:04.332 [2024-07-15 11:37:12.256902] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:04.332 [2024-07-15 11:37:12.256910] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:04.332 [2024-07-15 11:37:12.256918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:04.332 [2024-07-15 11:37:12.264764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.264803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.264821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:04.332 [2024-07-15 11:37:12.264835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:04.332 ===================================================== 00:11:04.332 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:04.332 ===================================================== 00:11:04.332 Controller Capabilities/Features 00:11:04.332 ================================ 00:11:04.332 Vendor ID: 4e58 00:11:04.332 Subsystem Vendor ID: 4e58 00:11:04.332 Serial Number: SPDK2 00:11:04.332 Model Number: SPDK bdev Controller 00:11:04.332 Firmware Version: 24.09 00:11:04.332 Recommended Arb Burst: 6 00:11:04.332 IEEE OUI Identifier: 8d 6b 50 00:11:04.332 Multi-path I/O 00:11:04.332 May have multiple subsystem ports: Yes 00:11:04.332 May have multiple controllers: Yes 00:11:04.332 Associated with SR-IOV VF: No 00:11:04.332 Max Data Transfer Size: 131072 00:11:04.332 Max Number of Namespaces: 32 00:11:04.332 Max Number of I/O Queues: 127 00:11:04.332 NVMe Specification Version (VS): 1.3 00:11:04.332 NVMe Specification Version (Identify): 1.3 00:11:04.332 Maximum Queue Entries: 256 00:11:04.332 Contiguous Queues Required: Yes 00:11:04.332 Arbitration Mechanisms Supported 00:11:04.332 Weighted Round Robin: Not Supported 00:11:04.332 Vendor Specific: Not Supported 00:11:04.332 Reset Timeout: 15000 ms 00:11:04.332 Doorbell Stride: 4 bytes 00:11:04.332 NVM Subsystem Reset: Not Supported 00:11:04.332 Command Sets Supported 00:11:04.332 NVM Command Set: Supported 00:11:04.332 Boot Partition: Not Supported 00:11:04.332 Memory Page Size Minimum: 4096 bytes 00:11:04.332 Memory Page Size Maximum: 4096 bytes 00:11:04.332 Persistent Memory Region: Not Supported 00:11:04.332 Optional Asynchronous Events Supported 00:11:04.332 Namespace Attribute Notices: Supported 00:11:04.332 Firmware Activation Notices: Not Supported 00:11:04.332 ANA Change Notices: Not Supported 00:11:04.332 PLE Aggregate Log Change Notices: Not Supported 00:11:04.332 LBA Status Info Alert Notices: Not Supported 00:11:04.332 EGE Aggregate Log Change Notices: Not Supported 00:11:04.332 Normal NVM Subsystem Shutdown event: Not Supported 00:11:04.332 Zone Descriptor Change Notices: Not Supported 00:11:04.332 Discovery Log Change Notices: Not Supported 00:11:04.332 Controller Attributes 00:11:04.332 128-bit Host Identifier: Supported 00:11:04.332 Non-Operational Permissive Mode: Not Supported 00:11:04.332 NVM Sets: Not Supported 00:11:04.332 Read Recovery Levels: Not Supported 
00:11:04.332 Endurance Groups: Not Supported 00:11:04.332 Predictable Latency Mode: Not Supported 00:11:04.332 Traffic Based Keep ALive: Not Supported 00:11:04.332 Namespace Granularity: Not Supported 00:11:04.332 SQ Associations: Not Supported 00:11:04.332 UUID List: Not Supported 00:11:04.332 Multi-Domain Subsystem: Not Supported 00:11:04.332 Fixed Capacity Management: Not Supported 00:11:04.332 Variable Capacity Management: Not Supported 00:11:04.332 Delete Endurance Group: Not Supported 00:11:04.332 Delete NVM Set: Not Supported 00:11:04.332 Extended LBA Formats Supported: Not Supported 00:11:04.332 Flexible Data Placement Supported: Not Supported 00:11:04.332 00:11:04.332 Controller Memory Buffer Support 00:11:04.332 ================================ 00:11:04.332 Supported: No 00:11:04.332 00:11:04.332 Persistent Memory Region Support 00:11:04.332 ================================ 00:11:04.332 Supported: No 00:11:04.332 00:11:04.332 Admin Command Set Attributes 00:11:04.332 ============================ 00:11:04.332 Security Send/Receive: Not Supported 00:11:04.332 Format NVM: Not Supported 00:11:04.332 Firmware Activate/Download: Not Supported 00:11:04.332 Namespace Management: Not Supported 00:11:04.332 Device Self-Test: Not Supported 00:11:04.332 Directives: Not Supported 00:11:04.332 NVMe-MI: Not Supported 00:11:04.332 Virtualization Management: Not Supported 00:11:04.332 Doorbell Buffer Config: Not Supported 00:11:04.332 Get LBA Status Capability: Not Supported 00:11:04.332 Command & Feature Lockdown Capability: Not Supported 00:11:04.332 Abort Command Limit: 4 00:11:04.332 Async Event Request Limit: 4 00:11:04.332 Number of Firmware Slots: N/A 00:11:04.332 Firmware Slot 1 Read-Only: N/A 00:11:04.332 Firmware Activation Without Reset: N/A 00:11:04.332 Multiple Update Detection Support: N/A 00:11:04.332 Firmware Update Granularity: No Information Provided 00:11:04.332 Per-Namespace SMART Log: No 00:11:04.332 Asymmetric Namespace Access Log Page: Not Supported 00:11:04.332 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:04.332 Command Effects Log Page: Supported 00:11:04.332 Get Log Page Extended Data: Supported 00:11:04.332 Telemetry Log Pages: Not Supported 00:11:04.332 Persistent Event Log Pages: Not Supported 00:11:04.332 Supported Log Pages Log Page: May Support 00:11:04.332 Commands Supported & Effects Log Page: Not Supported 00:11:04.332 Feature Identifiers & Effects Log Page:May Support 00:11:04.332 NVMe-MI Commands & Effects Log Page: May Support 00:11:04.332 Data Area 4 for Telemetry Log: Not Supported 00:11:04.332 Error Log Page Entries Supported: 128 00:11:04.332 Keep Alive: Supported 00:11:04.332 Keep Alive Granularity: 10000 ms 00:11:04.332 00:11:04.332 NVM Command Set Attributes 00:11:04.332 ========================== 00:11:04.332 Submission Queue Entry Size 00:11:04.332 Max: 64 00:11:04.332 Min: 64 00:11:04.332 Completion Queue Entry Size 00:11:04.332 Max: 16 00:11:04.332 Min: 16 00:11:04.332 Number of Namespaces: 32 00:11:04.332 Compare Command: Supported 00:11:04.332 Write Uncorrectable Command: Not Supported 00:11:04.332 Dataset Management Command: Supported 00:11:04.332 Write Zeroes Command: Supported 00:11:04.332 Set Features Save Field: Not Supported 00:11:04.332 Reservations: Not Supported 00:11:04.332 Timestamp: Not Supported 00:11:04.332 Copy: Supported 00:11:04.332 Volatile Write Cache: Present 00:11:04.332 Atomic Write Unit (Normal): 1 00:11:04.332 Atomic Write Unit (PFail): 1 00:11:04.332 Atomic Compare & Write Unit: 1 00:11:04.332 Fused Compare & Write: 
Supported 00:11:04.332 Scatter-Gather List 00:11:04.332 SGL Command Set: Supported (Dword aligned) 00:11:04.332 SGL Keyed: Not Supported 00:11:04.332 SGL Bit Bucket Descriptor: Not Supported 00:11:04.332 SGL Metadata Pointer: Not Supported 00:11:04.332 Oversized SGL: Not Supported 00:11:04.332 SGL Metadata Address: Not Supported 00:11:04.332 SGL Offset: Not Supported 00:11:04.332 Transport SGL Data Block: Not Supported 00:11:04.332 Replay Protected Memory Block: Not Supported 00:11:04.332 00:11:04.332 Firmware Slot Information 00:11:04.332 ========================= 00:11:04.332 Active slot: 1 00:11:04.333 Slot 1 Firmware Revision: 24.09 00:11:04.333 00:11:04.333 00:11:04.333 Commands Supported and Effects 00:11:04.333 ============================== 00:11:04.333 Admin Commands 00:11:04.333 -------------- 00:11:04.333 Get Log Page (02h): Supported 00:11:04.333 Identify (06h): Supported 00:11:04.333 Abort (08h): Supported 00:11:04.333 Set Features (09h): Supported 00:11:04.333 Get Features (0Ah): Supported 00:11:04.333 Asynchronous Event Request (0Ch): Supported 00:11:04.333 Keep Alive (18h): Supported 00:11:04.333 I/O Commands 00:11:04.333 ------------ 00:11:04.333 Flush (00h): Supported LBA-Change 00:11:04.333 Write (01h): Supported LBA-Change 00:11:04.333 Read (02h): Supported 00:11:04.333 Compare (05h): Supported 00:11:04.333 Write Zeroes (08h): Supported LBA-Change 00:11:04.333 Dataset Management (09h): Supported LBA-Change 00:11:04.333 Copy (19h): Supported LBA-Change 00:11:04.333 00:11:04.333 Error Log 00:11:04.333 ========= 00:11:04.333 00:11:04.333 Arbitration 00:11:04.333 =========== 00:11:04.333 Arbitration Burst: 1 00:11:04.333 00:11:04.333 Power Management 00:11:04.333 ================ 00:11:04.333 Number of Power States: 1 00:11:04.333 Current Power State: Power State #0 00:11:04.333 Power State #0: 00:11:04.333 Max Power: 0.00 W 00:11:04.333 Non-Operational State: Operational 00:11:04.333 Entry Latency: Not Reported 00:11:04.333 Exit Latency: Not Reported 00:11:04.333 Relative Read Throughput: 0 00:11:04.333 Relative Read Latency: 0 00:11:04.333 Relative Write Throughput: 0 00:11:04.333 Relative Write Latency: 0 00:11:04.333 Idle Power: Not Reported 00:11:04.333 Active Power: Not Reported 00:11:04.333 Non-Operational Permissive Mode: Not Supported 00:11:04.333 00:11:04.333 Health Information 00:11:04.333 ================== 00:11:04.333 Critical Warnings: 00:11:04.333 Available Spare Space: OK 00:11:04.333 Temperature: OK 00:11:04.333 Device Reliability: OK 00:11:04.333 Read Only: No 00:11:04.333 Volatile Memory Backup: OK 00:11:04.333 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:04.333 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:04.333 Available Spare: 0% 00:11:04.333 Available Sp[2024-07-15 11:37:12.264956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:04.333 [2024-07-15 11:37:12.272758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:04.333 [2024-07-15 11:37:12.272825] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:04.333 [2024-07-15 11:37:12.272844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.333 [2024-07-15 11:37:12.272856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.333 [2024-07-15 11:37:12.272866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.333 [2024-07-15 11:37:12.272876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.333 [2024-07-15 11:37:12.272961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:04.333 [2024-07-15 11:37:12.272983] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:04.333 [2024-07-15 11:37:12.273965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:04.333 [2024-07-15 11:37:12.274050] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:04.333 [2024-07-15 11:37:12.274080] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:04.333 [2024-07-15 11:37:12.274972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:04.333 [2024-07-15 11:37:12.274996] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:04.333 [2024-07-15 11:37:12.275064] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:04.333 [2024-07-15 11:37:12.277751] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:04.592 are Threshold: 0% 00:11:04.592 Life Percentage Used: 0% 00:11:04.592 Data Units Read: 0 00:11:04.592 Data Units Written: 0 00:11:04.592 Host Read Commands: 0 00:11:04.592 Host Write Commands: 0 00:11:04.592 Controller Busy Time: 0 minutes 00:11:04.592 Power Cycles: 0 00:11:04.592 Power On Hours: 0 hours 00:11:04.592 Unsafe Shutdowns: 0 00:11:04.592 Unrecoverable Media Errors: 0 00:11:04.592 Lifetime Error Log Entries: 0 00:11:04.592 Warning Temperature Time: 0 minutes 00:11:04.592 Critical Temperature Time: 0 minutes 00:11:04.592 00:11:04.592 Number of Queues 00:11:04.592 ================ 00:11:04.592 Number of I/O Submission Queues: 127 00:11:04.592 Number of I/O Completion Queues: 127 00:11:04.592 00:11:04.592 Active Namespaces 00:11:04.592 ================= 00:11:04.592 Namespace ID:1 00:11:04.592 Error Recovery Timeout: Unlimited 00:11:04.592 Command Set Identifier: NVM (00h) 00:11:04.592 Deallocate: Supported 00:11:04.592 Deallocated/Unwritten Error: Not Supported 00:11:04.592 Deallocated Read Value: Unknown 00:11:04.592 Deallocate in Write Zeroes: Not Supported 00:11:04.592 Deallocated Guard Field: 0xFFFF 00:11:04.592 Flush: Supported 00:11:04.592 Reservation: Supported 00:11:04.592 Namespace Sharing Capabilities: Multiple Controllers 00:11:04.592 Size (in LBAs): 131072 (0GiB) 00:11:04.592 Capacity (in LBAs): 131072 (0GiB) 00:11:04.592 Utilization (in LBAs): 131072 (0GiB) 00:11:04.592 NGUID: AC0C8E48600648279F55B05B21823C6E 00:11:04.592 UUID: ac0c8e48-6006-4827-9f55-b05b21823c6e 00:11:04.592 Thin Provisioning: Not Supported 00:11:04.592 Per-NS Atomic Units: Yes 00:11:04.592 Atomic Boundary Size (Normal): 0 00:11:04.592 Atomic Boundary Size 
(PFail): 0 00:11:04.592 Atomic Boundary Offset: 0 00:11:04.592 Maximum Single Source Range Length: 65535 00:11:04.592 Maximum Copy Length: 65535 00:11:04.592 Maximum Source Range Count: 1 00:11:04.592 NGUID/EUI64 Never Reused: No 00:11:04.592 Namespace Write Protected: No 00:11:04.592 Number of LBA Formats: 1 00:11:04.592 Current LBA Format: LBA Format #00 00:11:04.592 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:04.592 00:11:04.592 11:37:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:04.592 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.592 [2024-07-15 11:37:12.510194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:09.864 Initializing NVMe Controllers 00:11:09.864 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:09.864 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:09.864 Initialization complete. Launching workers. 00:11:09.864 ======================================================== 00:11:09.864 Latency(us) 00:11:09.864 Device Information : IOPS MiB/s Average min max 00:11:09.864 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34699.66 135.55 3688.21 1144.64 8132.62 00:11:09.864 ======================================================== 00:11:09.864 Total : 34699.66 135.55 3688.21 1144.64 8132.62 00:11:09.864 00:11:09.864 [2024-07-15 11:37:17.614090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:09.864 11:37:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:09.864 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.864 [2024-07-15 11:37:17.850744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:15.138 Initializing NVMe Controllers 00:11:15.138 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:15.138 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:15.138 Initialization complete. Launching workers. 
00:11:15.138 ======================================================== 00:11:15.138 Latency(us) 00:11:15.138 Device Information : IOPS MiB/s Average min max 00:11:15.138 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32058.35 125.23 3992.63 1202.98 9891.75 00:11:15.138 ======================================================== 00:11:15.138 Total : 32058.35 125.23 3992.63 1202.98 9891.75 00:11:15.138 00:11:15.138 [2024-07-15 11:37:22.870109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:15.138 11:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:15.138 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.138 [2024-07-15 11:37:23.083013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:20.409 [2024-07-15 11:37:28.224893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:20.409 Initializing NVMe Controllers 00:11:20.409 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:20.409 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:20.409 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:20.409 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:20.409 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:20.409 Initialization complete. Launching workers. 00:11:20.409 Starting thread on core 2 00:11:20.409 Starting thread on core 3 00:11:20.409 Starting thread on core 1 00:11:20.409 11:37:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:20.409 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.668 [2024-07-15 11:37:28.528243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:23.952 [2024-07-15 11:37:31.594828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:23.952 Initializing NVMe Controllers 00:11:23.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:23.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:23.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:23.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:23.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:23.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:23.952 Initialization complete. Launching workers. 
00:11:23.952 Starting thread on core 1 with urgent priority queue 00:11:23.952 Starting thread on core 2 with urgent priority queue 00:11:23.952 Starting thread on core 3 with urgent priority queue 00:11:23.952 Starting thread on core 0 with urgent priority queue 00:11:23.952 SPDK bdev Controller (SPDK2 ) core 0: 2422.33 IO/s 41.28 secs/100000 ios 00:11:23.952 SPDK bdev Controller (SPDK2 ) core 1: 2751.67 IO/s 36.34 secs/100000 ios 00:11:23.952 SPDK bdev Controller (SPDK2 ) core 2: 2824.00 IO/s 35.41 secs/100000 ios 00:11:23.952 SPDK bdev Controller (SPDK2 ) core 3: 2889.00 IO/s 34.61 secs/100000 ios 00:11:23.952 ======================================================== 00:11:23.952 00:11:23.952 11:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:23.952 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.952 [2024-07-15 11:37:31.902217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:23.952 Initializing NVMe Controllers 00:11:23.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.952 Namespace ID: 1 size: 0GB 00:11:23.952 Initialization complete. 00:11:23.952 INFO: using host memory buffer for IO 00:11:23.952 Hello world! 00:11:23.952 [2024-07-15 11:37:31.912270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:24.212 11:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:24.212 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.470 [2024-07-15 11:37:32.207084] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.407 Initializing NVMe Controllers 00:11:25.407 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.407 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.407 Initialization complete. Launching workers. 
00:11:25.407 submit (in ns) avg, min, max = 7254.5, 3512.2, 4015338.9 00:11:25.407 complete (in ns) avg, min, max = 26244.4, 2055.6, 4018255.6 00:11:25.407 00:11:25.407 Submit histogram 00:11:25.407 ================ 00:11:25.407 Range in us Cumulative Count 00:11:25.407 3.508 - 3.532: 0.4433% ( 59) 00:11:25.407 3.532 - 3.556: 1.0595% ( 82) 00:11:25.407 3.556 - 3.579: 2.9531% ( 252) 00:11:25.407 3.579 - 3.603: 7.0860% ( 550) 00:11:25.407 3.603 - 3.627: 13.2326% ( 818) 00:11:25.407 3.627 - 3.650: 22.2573% ( 1201) 00:11:25.407 3.650 - 3.674: 31.5524% ( 1237) 00:11:25.407 3.674 - 3.698: 40.6222% ( 1207) 00:11:25.407 3.698 - 3.721: 48.5723% ( 1058) 00:11:25.407 3.721 - 3.745: 54.3057% ( 763) 00:11:25.407 3.745 - 3.769: 59.1900% ( 650) 00:11:25.407 3.769 - 3.793: 63.7887% ( 612) 00:11:25.407 3.793 - 3.816: 67.5233% ( 497) 00:11:25.407 3.816 - 3.840: 71.1301% ( 480) 00:11:25.407 3.840 - 3.864: 74.6919% ( 474) 00:11:25.407 3.864 - 3.887: 78.2387% ( 472) 00:11:25.407 3.887 - 3.911: 81.6426% ( 453) 00:11:25.407 3.911 - 3.935: 84.7310% ( 411) 00:11:25.407 3.935 - 3.959: 86.8951% ( 288) 00:11:25.407 3.959 - 3.982: 88.8864% ( 265) 00:11:25.407 3.982 - 4.006: 90.5245% ( 218) 00:11:25.407 4.006 - 4.030: 91.8019% ( 170) 00:11:25.407 4.030 - 4.053: 93.1169% ( 175) 00:11:25.407 4.053 - 4.077: 94.0261% ( 121) 00:11:25.407 4.077 - 4.101: 94.7701% ( 99) 00:11:25.407 4.101 - 4.124: 95.4313% ( 88) 00:11:25.407 4.124 - 4.148: 95.8296% ( 53) 00:11:25.407 4.148 - 4.172: 96.1076% ( 37) 00:11:25.407 4.172 - 4.196: 96.3931% ( 38) 00:11:25.407 4.196 - 4.219: 96.5585% ( 22) 00:11:25.407 4.219 - 4.243: 96.6862% ( 17) 00:11:25.407 4.243 - 4.267: 96.7689% ( 11) 00:11:25.407 4.267 - 4.290: 96.8816% ( 15) 00:11:25.407 4.290 - 4.314: 96.9492% ( 9) 00:11:25.407 4.314 - 4.338: 97.0319% ( 11) 00:11:25.407 4.338 - 4.361: 97.0769% ( 6) 00:11:25.407 4.361 - 4.385: 97.1446% ( 9) 00:11:25.407 4.385 - 4.409: 97.2197% ( 10) 00:11:25.407 4.409 - 4.433: 97.2873% ( 9) 00:11:25.407 4.433 - 4.456: 97.3099% ( 3) 00:11:25.407 4.504 - 4.527: 97.3174% ( 1) 00:11:25.407 4.527 - 4.551: 97.3249% ( 1) 00:11:25.407 4.551 - 4.575: 97.3324% ( 1) 00:11:25.407 4.599 - 4.622: 97.3399% ( 1) 00:11:25.407 4.622 - 4.646: 97.3475% ( 1) 00:11:25.407 4.646 - 4.670: 97.3550% ( 1) 00:11:25.407 4.670 - 4.693: 97.3700% ( 2) 00:11:25.407 4.693 - 4.717: 97.3850% ( 2) 00:11:25.407 4.717 - 4.741: 97.4301% ( 6) 00:11:25.407 4.741 - 4.764: 97.4677% ( 5) 00:11:25.407 4.764 - 4.788: 97.5053% ( 5) 00:11:25.407 4.788 - 4.812: 97.5353% ( 4) 00:11:25.407 4.812 - 4.836: 97.5804% ( 6) 00:11:25.407 4.836 - 4.859: 97.5954% ( 2) 00:11:25.407 4.859 - 4.883: 97.6330% ( 5) 00:11:25.407 4.883 - 4.907: 97.6781% ( 6) 00:11:25.407 4.907 - 4.930: 97.7081% ( 4) 00:11:25.407 4.930 - 4.954: 97.7532% ( 6) 00:11:25.407 4.954 - 4.978: 97.8209% ( 9) 00:11:25.407 4.978 - 5.001: 97.8584% ( 5) 00:11:25.407 5.001 - 5.025: 97.8960% ( 5) 00:11:25.407 5.025 - 5.049: 97.9411% ( 6) 00:11:25.407 5.049 - 5.073: 97.9787% ( 5) 00:11:25.407 5.073 - 5.096: 98.0012% ( 3) 00:11:25.407 5.096 - 5.120: 98.0313% ( 4) 00:11:25.407 5.120 - 5.144: 98.0613% ( 4) 00:11:25.407 5.144 - 5.167: 98.0763% ( 2) 00:11:25.407 5.167 - 5.191: 98.1139% ( 5) 00:11:25.407 5.191 - 5.215: 98.1515% ( 5) 00:11:25.407 5.239 - 5.262: 98.1665% ( 2) 00:11:25.407 5.262 - 5.286: 98.1740% ( 1) 00:11:25.407 5.310 - 5.333: 98.1815% ( 1) 00:11:25.407 5.333 - 5.357: 98.1891% ( 1) 00:11:25.407 5.357 - 5.381: 98.1966% ( 1) 00:11:25.407 5.404 - 5.428: 98.2041% ( 1) 00:11:25.407 5.428 - 5.452: 98.2116% ( 1) 00:11:25.407 5.452 - 5.476: 98.2191% ( 1) 
00:11:25.407 5.570 - 5.594: 98.2266% ( 1) 00:11:25.407 5.665 - 5.689: 98.2341% ( 1) 00:11:25.408 5.973 - 5.997: 98.2417% ( 1) 00:11:25.408 6.021 - 6.044: 98.2492% ( 1) 00:11:25.408 6.116 - 6.163: 98.2717% ( 3) 00:11:25.408 6.305 - 6.353: 98.2792% ( 1) 00:11:25.408 6.400 - 6.447: 98.2943% ( 2) 00:11:25.408 6.542 - 6.590: 98.3018% ( 1) 00:11:25.408 6.637 - 6.684: 98.3093% ( 1) 00:11:25.408 6.732 - 6.779: 98.3168% ( 1) 00:11:25.408 6.827 - 6.874: 98.3318% ( 2) 00:11:25.408 6.874 - 6.921: 98.3393% ( 1) 00:11:25.408 6.921 - 6.969: 98.3469% ( 1) 00:11:25.408 7.064 - 7.111: 98.3544% ( 1) 00:11:25.408 7.159 - 7.206: 98.3694% ( 2) 00:11:25.408 7.206 - 7.253: 98.4070% ( 5) 00:11:25.408 7.253 - 7.301: 98.4145% ( 1) 00:11:25.408 7.443 - 7.490: 98.4220% ( 1) 00:11:25.408 7.490 - 7.538: 98.4370% ( 2) 00:11:25.408 7.538 - 7.585: 98.4521% ( 2) 00:11:25.408 7.585 - 7.633: 98.4596% ( 1) 00:11:25.408 7.680 - 7.727: 98.4671% ( 1) 00:11:25.408 7.727 - 7.775: 98.4896% ( 3) 00:11:25.408 7.822 - 7.870: 98.5122% ( 3) 00:11:25.408 7.870 - 7.917: 98.5197% ( 1) 00:11:25.408 7.917 - 7.964: 98.5347% ( 2) 00:11:25.408 7.964 - 8.012: 98.5497% ( 2) 00:11:25.408 8.059 - 8.107: 98.5573% ( 1) 00:11:25.408 8.107 - 8.154: 98.5723% ( 2) 00:11:25.408 8.154 - 8.201: 98.5798% ( 1) 00:11:25.408 8.249 - 8.296: 98.5948% ( 2) 00:11:25.408 8.296 - 8.344: 98.6099% ( 2) 00:11:25.408 8.581 - 8.628: 98.6174% ( 1) 00:11:25.408 8.628 - 8.676: 98.6249% ( 1) 00:11:25.408 8.723 - 8.770: 98.6399% ( 2) 00:11:25.408 8.865 - 8.913: 98.6549% ( 2) 00:11:25.408 8.913 - 8.960: 98.6775% ( 3) 00:11:25.408 8.960 - 9.007: 98.6850% ( 1) 00:11:25.408 9.055 - 9.102: 98.7000% ( 2) 00:11:25.408 9.102 - 9.150: 98.7075% ( 1) 00:11:25.408 9.150 - 9.197: 98.7226% ( 2) 00:11:25.408 9.244 - 9.292: 98.7301% ( 1) 00:11:25.408 9.292 - 9.339: 98.7376% ( 1) 00:11:25.408 9.339 - 9.387: 98.7451% ( 1) 00:11:25.408 9.387 - 9.434: 98.7526% ( 1) 00:11:25.408 9.434 - 9.481: 98.7601% ( 1) 00:11:25.408 9.481 - 9.529: 98.7677% ( 1) 00:11:25.408 9.576 - 9.624: 98.7752% ( 1) 00:11:25.408 9.766 - 9.813: 98.7827% ( 1) 00:11:25.408 9.956 - 10.003: 98.7902% ( 1) 00:11:25.408 10.003 - 10.050: 98.8127% ( 3) 00:11:25.408 10.050 - 10.098: 98.8203% ( 1) 00:11:25.408 10.430 - 10.477: 98.8353% ( 2) 00:11:25.408 10.477 - 10.524: 98.8428% ( 1) 00:11:25.408 10.524 - 10.572: 98.8503% ( 1) 00:11:25.408 10.619 - 10.667: 98.8578% ( 1) 00:11:25.408 10.667 - 10.714: 98.8729% ( 2) 00:11:25.408 10.714 - 10.761: 98.8804% ( 1) 00:11:25.408 10.761 - 10.809: 98.8879% ( 1) 00:11:25.408 10.904 - 10.951: 98.9029% ( 2) 00:11:25.408 11.093 - 11.141: 98.9255% ( 3) 00:11:25.408 12.041 - 12.089: 98.9405% ( 2) 00:11:25.408 12.089 - 12.136: 98.9480% ( 1) 00:11:25.408 12.895 - 12.990: 98.9555% ( 1) 00:11:25.408 12.990 - 13.084: 98.9630% ( 1) 00:11:25.408 13.084 - 13.179: 98.9705% ( 1) 00:11:25.408 13.274 - 13.369: 98.9781% ( 1) 00:11:25.408 13.369 - 13.464: 98.9856% ( 1) 00:11:25.408 13.748 - 13.843: 98.9931% ( 1) 00:11:25.408 13.843 - 13.938: 99.0006% ( 1) 00:11:25.408 14.033 - 14.127: 99.0156% ( 2) 00:11:25.408 14.222 - 14.317: 99.0307% ( 2) 00:11:25.408 14.317 - 14.412: 99.0382% ( 1) 00:11:25.408 14.507 - 14.601: 99.0457% ( 1) 00:11:25.408 14.601 - 14.696: 99.0532% ( 1) 00:11:25.408 14.886 - 14.981: 99.0757% ( 3) 00:11:25.408 16.308 - 16.403: 99.0833% ( 1) 00:11:25.408 17.161 - 17.256: 99.0908% ( 1) 00:11:25.408 17.256 - 17.351: 99.0983% ( 1) 00:11:25.408 17.351 - 17.446: 99.1133% ( 2) 00:11:25.408 17.446 - 17.541: 99.1434% ( 4) 00:11:25.408 17.541 - 17.636: 99.1584% ( 2) 00:11:25.408 17.636 - 17.730: 99.2035% ( 6) 
00:11:25.408 17.730 - 17.825: 99.2636% ( 8) 00:11:25.408 17.825 - 17.920: 99.3162% ( 7) 00:11:25.408 17.920 - 18.015: 99.3688% ( 7) 00:11:25.408 18.015 - 18.110: 99.4364% ( 9) 00:11:25.408 18.110 - 18.204: 99.4590% ( 3) 00:11:25.408 18.204 - 18.299: 99.5191% ( 8) 00:11:25.408 18.299 - 18.394: 99.5642% ( 6) 00:11:25.408 18.394 - 18.489: 99.6393% ( 10) 00:11:25.408 18.489 - 18.584: 99.6844% ( 6) 00:11:25.408 18.584 - 18.679: 99.7069% ( 3) 00:11:25.408 18.679 - 18.773: 99.7295% ( 3) 00:11:25.408 18.773 - 18.868: 99.7445% ( 2) 00:11:25.408 18.868 - 18.963: 99.7821% ( 5) 00:11:25.408 18.963 - 19.058: 99.8046% ( 3) 00:11:25.408 19.247 - 19.342: 99.8121% ( 1) 00:11:25.408 19.437 - 19.532: 99.8197% ( 1) 00:11:25.408 19.532 - 19.627: 99.8347% ( 2) 00:11:25.408 19.627 - 19.721: 99.8422% ( 1) 00:11:25.408 20.480 - 20.575: 99.8497% ( 1) 00:11:25.408 20.575 - 20.670: 99.8572% ( 1) 00:11:25.408 21.239 - 21.333: 99.8647% ( 1) 00:11:25.408 21.997 - 22.092: 99.8723% ( 1) 00:11:25.408 22.756 - 22.850: 99.8798% ( 1) 00:11:25.408 23.988 - 24.083: 99.8873% ( 1) 00:11:25.408 28.444 - 28.634: 99.8948% ( 1) 00:11:25.408 28.634 - 28.824: 99.9023% ( 1) 00:11:25.408 29.203 - 29.393: 99.9098% ( 1) 00:11:25.408 29.772 - 29.961: 99.9173% ( 1) 00:11:25.408 3980.705 - 4004.978: 99.9850% ( 9) 00:11:25.408 4004.978 - 4029.250: 100.0000% ( 2) 00:11:25.408 00:11:25.408 Complete histogram 00:11:25.408 ================== 00:11:25.408 Range in us Cumulative Count 00:11:25.408 2.050 - 2.062: 0.5786% ( 77) 00:11:25.408 2.062 - 2.074: 35.2946% ( 4620) 00:11:25.408 2.074 - 2.086: 44.2290% ( 1189) 00:11:25.408 2.086 - 2.098: 47.4602% ( 430) 00:11:25.408 2.098 - 2.110: 59.4304% ( 1593) 00:11:25.408 2.110 - 2.121: 62.3610% ( 390) 00:11:25.408 2.121 - 2.133: 67.2152% ( 646) 00:11:25.408 2.133 - 2.145: 79.6589% ( 1656) 00:11:25.408 2.145 - 2.157: 81.4698% ( 241) 00:11:25.408 2.157 - 2.169: 84.2726% ( 373) 00:11:25.408 2.169 - 2.181: 88.0298% ( 500) 00:11:25.408 2.181 - 2.193: 89.1043% ( 143) 00:11:25.408 2.193 - 2.204: 89.9459% ( 112) 00:11:25.408 2.204 - 2.216: 91.4563% ( 201) 00:11:25.408 2.216 - 2.228: 93.2372% ( 237) 00:11:25.408 2.228 - 2.240: 94.3117% ( 143) 00:11:25.408 2.240 - 2.252: 94.9880% ( 90) 00:11:25.408 2.252 - 2.264: 95.2059% ( 29) 00:11:25.408 2.264 - 2.276: 95.3261% ( 16) 00:11:25.408 2.276 - 2.287: 95.4914% ( 22) 00:11:25.408 2.287 - 2.299: 95.7770% ( 38) 00:11:25.408 2.299 - 2.311: 96.0099% ( 31) 00:11:25.408 2.311 - 2.323: 96.0926% ( 11) 00:11:25.408 2.323 - 2.335: 96.1301% ( 5) 00:11:25.408 2.335 - 2.347: 96.1752% ( 6) 00:11:25.408 2.347 - 2.359: 96.3481% ( 23) 00:11:25.408 2.359 - 2.370: 96.5585% ( 28) 00:11:25.408 2.370 - 2.382: 96.8215% ( 35) 00:11:25.408 2.382 - 2.394: 97.1671% ( 46) 00:11:25.408 2.394 - 2.406: 97.4301% ( 35) 00:11:25.408 2.406 - 2.418: 97.6706% ( 32) 00:11:25.408 2.418 - 2.430: 97.8659% ( 26) 00:11:25.408 2.430 - 2.441: 97.9862% ( 16) 00:11:25.408 2.441 - 2.453: 98.0763% ( 12) 00:11:25.408 2.453 - 2.465: 98.1966% ( 16) 00:11:25.408 2.465 - 2.477: 98.2191% ( 3) 00:11:25.408 2.477 - 2.489: 98.2341% ( 2) 00:11:25.408 2.489 - 2.501: 98.2867% ( 7) 00:11:25.408 2.501 - 2.513: 98.3318% ( 6) 00:11:25.408 2.513 - 2.524: 98.3694% ( 5) 00:11:25.408 2.524 - 2.536: 98.3844% ( 2) 00:11:25.408 2.536 - 2.548: 98.4220% ( 5) 00:11:25.408 2.572 - 2.584: 98.4370% ( 2) 00:11:25.408 2.619 - 2.631: 98.4445% ( 1) 00:11:25.408 2.631 - 2.643: 98.4521% ( 1) 00:11:25.408 2.655 - 2.667: 98.4596% ( 1) 00:11:25.408 2.667 - 2.679: 98.4671% ( 1) 00:11:25.408 2.679 - 2.690: 98.4746% ( 1) 00:11:25.408 2.702 - 2.714: 98.4821% ( 
1) 00:11:25.408 2.726 - 2.738: 98.4896% ( 1) 00:11:25.408 2.833 - 2.844: 98.4971% ( 1) 00:11:25.408 3.461 - 3.484: 98.5197% ( 3) 00:11:25.408 3.484 - 3.508: 98.5347% ( 2) 00:11:25.408 3.508 - 3.532: 98.5422% ( 1) 00:11:25.408 3.532 - 3.556: 98.5573% ( 2) 00:11:25.408 3.556 - 3.579: 98.5648% ( 1) 00:11:25.408 3.579 - 3.603: 98.5723% ( 1) 00:11:25.408 3.603 - 3.627: 98.5873% ( 2) 00:11:25.408 3.627 - 3.650: 98.5948% ( 1) 00:11:25.408 3.650 - 3.674: 98.6023% ( 1) 00:11:25.408 3.674 - 3.698: 98.6174% ( 2) 00:11:25.408 3.721 - 3.745: 9[2024-07-15 11:37:33.308507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:25.408 8.6249% ( 1) 00:11:25.408 3.769 - 3.793: 98.6324% ( 1) 00:11:25.408 3.793 - 3.816: 98.6399% ( 1) 00:11:25.408 3.816 - 3.840: 98.6549% ( 2) 00:11:25.408 3.887 - 3.911: 98.6700% ( 2) 00:11:25.408 3.935 - 3.959: 98.6775% ( 1) 00:11:25.408 3.959 - 3.982: 98.6850% ( 1) 00:11:25.408 3.982 - 4.006: 98.7000% ( 2) 00:11:25.408 4.006 - 4.030: 98.7075% ( 1) 00:11:25.408 4.030 - 4.053: 98.7151% ( 1) 00:11:25.408 4.124 - 4.148: 98.7301% ( 2) 00:11:25.408 4.338 - 4.361: 98.7376% ( 1) 00:11:25.408 5.049 - 5.073: 98.7451% ( 1) 00:11:25.408 5.167 - 5.191: 98.7526% ( 1) 00:11:25.408 5.262 - 5.286: 98.7601% ( 1) 00:11:25.408 5.381 - 5.404: 98.7677% ( 1) 00:11:25.408 5.547 - 5.570: 98.7752% ( 1) 00:11:25.408 5.641 - 5.665: 98.7827% ( 1) 00:11:25.408 5.713 - 5.736: 98.7902% ( 1) 00:11:25.408 5.784 - 5.807: 98.8052% ( 2) 00:11:25.409 5.973 - 5.997: 98.8127% ( 1) 00:11:25.409 6.068 - 6.116: 98.8203% ( 1) 00:11:25.409 6.258 - 6.305: 98.8278% ( 1) 00:11:25.409 6.305 - 6.353: 98.8353% ( 1) 00:11:25.409 6.779 - 6.827: 98.8428% ( 1) 00:11:25.409 6.827 - 6.874: 98.8503% ( 1) 00:11:25.409 7.016 - 7.064: 98.8578% ( 1) 00:11:25.409 7.111 - 7.159: 98.8653% ( 1) 00:11:25.409 7.206 - 7.253: 98.8729% ( 1) 00:11:25.409 7.253 - 7.301: 98.8804% ( 1) 00:11:25.409 7.301 - 7.348: 98.8879% ( 1) 00:11:25.409 7.775 - 7.822: 98.8954% ( 1) 00:11:25.409 8.344 - 8.391: 98.9104% ( 2) 00:11:25.409 11.852 - 11.899: 98.9179% ( 1) 00:11:25.409 15.644 - 15.739: 98.9330% ( 2) 00:11:25.409 15.739 - 15.834: 98.9405% ( 1) 00:11:25.409 15.929 - 16.024: 98.9555% ( 2) 00:11:25.409 16.024 - 16.119: 99.0006% ( 6) 00:11:25.409 16.119 - 16.213: 99.0156% ( 2) 00:11:25.409 16.213 - 16.308: 99.0382% ( 3) 00:11:25.409 16.308 - 16.403: 99.0532% ( 2) 00:11:25.409 16.403 - 16.498: 99.0833% ( 4) 00:11:25.409 16.498 - 16.593: 99.0983% ( 2) 00:11:25.409 16.593 - 16.687: 99.1434% ( 6) 00:11:25.409 16.687 - 16.782: 99.2035% ( 8) 00:11:25.409 16.782 - 16.877: 99.2561% ( 7) 00:11:25.409 16.877 - 16.972: 99.2786% ( 3) 00:11:25.409 16.972 - 17.067: 99.3012% ( 3) 00:11:25.409 17.067 - 17.161: 99.3162% ( 2) 00:11:25.409 17.161 - 17.256: 99.3463% ( 4) 00:11:25.409 17.256 - 17.351: 99.3538% ( 1) 00:11:25.409 17.351 - 17.446: 99.3763% ( 3) 00:11:25.409 17.446 - 17.541: 99.3838% ( 1) 00:11:25.409 17.541 - 17.636: 99.3989% ( 2) 00:11:25.409 3980.705 - 4004.978: 99.9023% ( 67) 00:11:25.409 4004.978 - 4029.250: 100.0000% ( 13) 00:11:25.409 00:11:25.409 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:25.409 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:25.409 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:25.409 11:37:33 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:25.409 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:25.667 [ 00:11:25.667 { 00:11:25.667 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:25.667 "subtype": "Discovery", 00:11:25.667 "listen_addresses": [], 00:11:25.667 "allow_any_host": true, 00:11:25.667 "hosts": [] 00:11:25.667 }, 00:11:25.667 { 00:11:25.667 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:25.667 "subtype": "NVMe", 00:11:25.667 "listen_addresses": [ 00:11:25.667 { 00:11:25.667 "trtype": "VFIOUSER", 00:11:25.667 "adrfam": "IPv4", 00:11:25.667 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:25.667 "trsvcid": "0" 00:11:25.667 } 00:11:25.667 ], 00:11:25.667 "allow_any_host": true, 00:11:25.667 "hosts": [], 00:11:25.667 "serial_number": "SPDK1", 00:11:25.667 "model_number": "SPDK bdev Controller", 00:11:25.667 "max_namespaces": 32, 00:11:25.667 "min_cntlid": 1, 00:11:25.667 "max_cntlid": 65519, 00:11:25.667 "namespaces": [ 00:11:25.667 { 00:11:25.667 "nsid": 1, 00:11:25.667 "bdev_name": "Malloc1", 00:11:25.667 "name": "Malloc1", 00:11:25.667 "nguid": "40486BA397804021B1B048BCCA7DBD8A", 00:11:25.667 "uuid": "40486ba3-9780-4021-b1b0-48bcca7dbd8a" 00:11:25.667 }, 00:11:25.667 { 00:11:25.667 "nsid": 2, 00:11:25.667 "bdev_name": "Malloc3", 00:11:25.667 "name": "Malloc3", 00:11:25.667 "nguid": "A532E5A679B1451690D653DAC7869AA6", 00:11:25.667 "uuid": "a532e5a6-79b1-4516-90d6-53dac7869aa6" 00:11:25.667 } 00:11:25.667 ] 00:11:25.667 }, 00:11:25.667 { 00:11:25.667 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:25.667 "subtype": "NVMe", 00:11:25.667 "listen_addresses": [ 00:11:25.667 { 00:11:25.667 "trtype": "VFIOUSER", 00:11:25.667 "adrfam": "IPv4", 00:11:25.667 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:25.667 "trsvcid": "0" 00:11:25.667 } 00:11:25.667 ], 00:11:25.667 "allow_any_host": true, 00:11:25.667 "hosts": [], 00:11:25.667 "serial_number": "SPDK2", 00:11:25.667 "model_number": "SPDK bdev Controller", 00:11:25.667 "max_namespaces": 32, 00:11:25.667 "min_cntlid": 1, 00:11:25.667 "max_cntlid": 65519, 00:11:25.667 "namespaces": [ 00:11:25.667 { 00:11:25.667 "nsid": 1, 00:11:25.667 "bdev_name": "Malloc2", 00:11:25.667 "name": "Malloc2", 00:11:25.667 "nguid": "AC0C8E48600648279F55B05B21823C6E", 00:11:25.667 "uuid": "ac0c8e48-6006-4827-9f55-b05b21823c6e" 00:11:25.667 } 00:11:25.667 ] 00:11:25.667 } 00:11:25.667 ] 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2975048 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:25.667 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:25.925 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.925 [2024-07-15 11:37:33.785214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.925 Malloc4 00:11:25.925 11:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:26.183 [2024-07-15 11:37:34.138869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:26.183 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:26.441 Asynchronous Event Request test 00:11:26.441 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:26.441 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:26.441 Registering asynchronous event callbacks... 00:11:26.441 Starting namespace attribute notice tests for all controllers... 00:11:26.441 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:26.441 aer_cb - Changed Namespace 00:11:26.441 Cleaning up... 00:11:26.700 [ 00:11:26.700 { 00:11:26.700 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.700 "subtype": "Discovery", 00:11:26.700 "listen_addresses": [], 00:11:26.700 "allow_any_host": true, 00:11:26.700 "hosts": [] 00:11:26.700 }, 00:11:26.700 { 00:11:26.700 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:26.700 "subtype": "NVMe", 00:11:26.700 "listen_addresses": [ 00:11:26.700 { 00:11:26.700 "trtype": "VFIOUSER", 00:11:26.700 "adrfam": "IPv4", 00:11:26.700 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:26.700 "trsvcid": "0" 00:11:26.700 } 00:11:26.700 ], 00:11:26.700 "allow_any_host": true, 00:11:26.700 "hosts": [], 00:11:26.700 "serial_number": "SPDK1", 00:11:26.700 "model_number": "SPDK bdev Controller", 00:11:26.700 "max_namespaces": 32, 00:11:26.700 "min_cntlid": 1, 00:11:26.700 "max_cntlid": 65519, 00:11:26.700 "namespaces": [ 00:11:26.700 { 00:11:26.700 "nsid": 1, 00:11:26.700 "bdev_name": "Malloc1", 00:11:26.700 "name": "Malloc1", 00:11:26.700 "nguid": "40486BA397804021B1B048BCCA7DBD8A", 00:11:26.700 "uuid": "40486ba3-9780-4021-b1b0-48bcca7dbd8a" 00:11:26.700 }, 00:11:26.700 { 00:11:26.700 "nsid": 2, 00:11:26.700 "bdev_name": "Malloc3", 00:11:26.700 "name": "Malloc3", 00:11:26.700 "nguid": "A532E5A679B1451690D653DAC7869AA6", 00:11:26.700 "uuid": "a532e5a6-79b1-4516-90d6-53dac7869aa6" 00:11:26.700 } 00:11:26.700 ] 00:11:26.700 }, 00:11:26.700 { 00:11:26.700 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:26.700 "subtype": "NVMe", 00:11:26.700 "listen_addresses": [ 00:11:26.700 { 00:11:26.700 "trtype": "VFIOUSER", 00:11:26.700 "adrfam": "IPv4", 00:11:26.700 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:26.700 "trsvcid": "0" 00:11:26.700 } 00:11:26.700 ], 00:11:26.700 "allow_any_host": true, 00:11:26.700 "hosts": [], 00:11:26.700 "serial_number": "SPDK2", 00:11:26.700 "model_number": "SPDK bdev Controller", 00:11:26.700 
"max_namespaces": 32, 00:11:26.700 "min_cntlid": 1, 00:11:26.700 "max_cntlid": 65519, 00:11:26.700 "namespaces": [ 00:11:26.700 { 00:11:26.700 "nsid": 1, 00:11:26.700 "bdev_name": "Malloc2", 00:11:26.700 "name": "Malloc2", 00:11:26.700 "nguid": "AC0C8E48600648279F55B05B21823C6E", 00:11:26.700 "uuid": "ac0c8e48-6006-4827-9f55-b05b21823c6e" 00:11:26.700 }, 00:11:26.700 { 00:11:26.700 "nsid": 2, 00:11:26.700 "bdev_name": "Malloc4", 00:11:26.700 "name": "Malloc4", 00:11:26.700 "nguid": "478015928F7140B5A9319FDE23981906", 00:11:26.700 "uuid": "47801592-8f71-40b5-a931-9fde23981906" 00:11:26.700 } 00:11:26.700 ] 00:11:26.700 } 00:11:26.700 ] 00:11:26.700 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2975048 00:11:26.700 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2969438 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2969438 ']' 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2969438 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2969438 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2969438' 00:11:26.701 killing process with pid 2969438 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2969438 00:11:26.701 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2969438 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2975192 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2975192' 00:11:26.959 Process pid: 2975192 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2975192 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2975192 ']' 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.959 11:37:34 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.959 11:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:26.959 [2024-07-15 11:37:34.890105] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:26.959 [2024-07-15 11:37:34.891125] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:11:26.959 [2024-07-15 11:37:34.891190] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.959 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.215 [2024-07-15 11:37:34.949223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.215 [2024-07-15 11:37:35.052960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.215 [2024-07-15 11:37:35.053023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.216 [2024-07-15 11:37:35.053039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.216 [2024-07-15 11:37:35.053049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.216 [2024-07-15 11:37:35.053059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.216 [2024-07-15 11:37:35.053183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.216 [2024-07-15 11:37:35.053249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.216 [2024-07-15 11:37:35.053316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.216 [2024-07-15 11:37:35.053318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.216 [2024-07-15 11:37:35.151199] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:27.216 [2024-07-15 11:37:35.151326] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:27.216 [2024-07-15 11:37:35.151620] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:27.216 [2024-07-15 11:37:35.152143] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:27.216 [2024-07-15 11:37:35.152382] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
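The interrupt-mode pass that follows provisions the same pair of vfio-user controllers through rpc.py. As a reading aid, a minimal sketch of the per-controller sequence exactly as it is exercised in the trace below (the $rpc shorthand for the full rpc.py path is introduced here only for readability and is not part of the log; the values shown are the vfio-user1/1 case):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create the VFIOUSER transport, passing the extra '-M -I' arguments used in this interrupt-mode pass
  $rpc nvmf_create_transport -t VFIOUSER -M -I
  # one socket directory, malloc bdev, subsystem, namespace and listener per controller
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second controller repeats the same steps with Malloc2, SPDK2, cnode2 and the vfio-user2/2 socket directory.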
00:11:27.216 11:37:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.216 11:37:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:27.216 11:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:28.591 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:28.591 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:28.591 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:28.591 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:28.591 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:28.591 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:28.848 Malloc1 00:11:28.848 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:29.106 11:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:29.364 11:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:29.621 11:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:29.621 11:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:29.621 11:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:29.879 Malloc2 00:11:29.879 11:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:30.135 11:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:30.391 11:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2975192 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2975192 ']' 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2975192 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.650 11:37:38 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2975192 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2975192' 00:11:30.650 killing process with pid 2975192 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2975192 00:11:30.650 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2975192 00:11:30.908 11:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:30.908 11:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:30.908 00:11:30.908 real 0m52.615s 00:11:30.908 user 3m27.787s 00:11:30.908 sys 0m4.211s 00:11:30.908 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.908 11:37:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:30.908 ************************************ 00:11:30.908 END TEST nvmf_vfio_user 00:11:30.908 ************************************ 00:11:31.167 11:37:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:31.167 11:37:38 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:31.167 11:37:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:31.167 11:37:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.167 11:37:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.167 ************************************ 00:11:31.167 START TEST nvmf_vfio_user_nvme_compliance 00:11:31.167 ************************************ 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:31.167 * Looking for test storage... 
00:11:31.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2975672 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2975672' 00:11:31.167 Process pid: 2975672 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2975672 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2975672 ']' 00:11:31.167 11:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.167 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.167 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.167 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.167 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:31.167 [2024-07-15 11:37:39.044549] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:11:31.167 [2024-07-15 11:37:39.044629] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.167 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.167 [2024-07-15 11:37:39.107869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.432 [2024-07-15 11:37:39.218530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.432 [2024-07-15 11:37:39.218593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.432 [2024-07-15 11:37:39.218606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.432 [2024-07-15 11:37:39.218617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.432 [2024-07-15 11:37:39.218626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:31.432 [2024-07-15 11:37:39.218717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.432 [2024-07-15 11:37:39.218777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.432 [2024-07-15 11:37:39.218782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.432 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.432 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:31.432 11:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.407 malloc0 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.407 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.666 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.666 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:32.666 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.666 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.666 11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.666 
11:37:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:32.666 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.666 00:11:32.666 00:11:32.666 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.666 http://cunit.sourceforge.net/ 00:11:32.666 00:11:32.666 00:11:32.666 Suite: nvme_compliance 00:11:32.666 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 11:37:40.550504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.666 [2024-07-15 11:37:40.552005] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:32.666 [2024-07-15 11:37:40.552047] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:32.666 [2024-07-15 11:37:40.552060] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:32.666 [2024-07-15 11:37:40.553538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.666 passed 00:11:32.666 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 11:37:40.638136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.666 [2024-07-15 11:37:40.641158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.924 passed 00:11:32.924 Test: admin_identify_ns ...[2024-07-15 11:37:40.725227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.925 [2024-07-15 11:37:40.788771] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:32.925 [2024-07-15 11:37:40.796751] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:32.925 [2024-07-15 11:37:40.817881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.925 passed 00:11:32.925 Test: admin_get_features_mandatory_features ...[2024-07-15 11:37:40.898685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.925 [2024-07-15 11:37:40.903715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.182 passed 00:11:33.182 Test: admin_get_features_optional_features ...[2024-07-15 11:37:40.989288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.182 [2024-07-15 11:37:40.992310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.182 passed 00:11:33.182 Test: admin_set_features_number_of_queues ...[2024-07-15 11:37:41.072177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.440 [2024-07-15 11:37:41.180853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.440 passed 00:11:33.440 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 11:37:41.263400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.440 [2024-07-15 11:37:41.266423] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.441 passed 00:11:33.441 Test: admin_get_log_page_with_lpo ...[2024-07-15 11:37:41.346138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.441 [2024-07-15 11:37:41.417784] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:33.698 [2024-07-15 11:37:41.433850] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.698 passed 00:11:33.698 Test: fabric_property_get ...[2024-07-15 11:37:41.516100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.698 [2024-07-15 11:37:41.517385] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:33.698 [2024-07-15 11:37:41.519141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.698 passed 00:11:33.698 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 11:37:41.604670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.698 [2024-07-15 11:37:41.605998] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:33.698 [2024-07-15 11:37:41.607692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.698 passed 00:11:33.956 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 11:37:41.690779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.956 [2024-07-15 11:37:41.776763] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.956 [2024-07-15 11:37:41.792747] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.956 [2024-07-15 11:37:41.797866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.956 passed 00:11:33.956 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 11:37:41.881457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.956 [2024-07-15 11:37:41.882716] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:33.956 [2024-07-15 11:37:41.884476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.956 passed 00:11:34.215 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 11:37:41.966486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.215 [2024-07-15 11:37:42.041776] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:34.215 [2024-07-15 11:37:42.065767] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:34.215 [2024-07-15 11:37:42.070858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.215 passed 00:11:34.215 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 11:37:42.154467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.215 [2024-07-15 11:37:42.155817] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:34.215 [2024-07-15 11:37:42.155876] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:34.215 [2024-07-15 11:37:42.157488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.215 passed 00:11:34.474 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 11:37:42.238703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.474 [2024-07-15 11:37:42.332763] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:11:34.474 [2024-07-15 11:37:42.340774] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:34.474 [2024-07-15 11:37:42.348745] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:34.474 [2024-07-15 11:37:42.356745] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:34.474 [2024-07-15 11:37:42.385875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.474 passed 00:11:34.733 Test: admin_create_io_sq_verify_pc ...[2024-07-15 11:37:42.469430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.733 [2024-07-15 11:37:42.485762] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:34.733 [2024-07-15 11:37:42.502857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.733 passed 00:11:34.733 Test: admin_create_io_qp_max_qps ...[2024-07-15 11:37:42.584360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:36.108 [2024-07-15 11:37:43.695767] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:36.108 [2024-07-15 11:37:44.066920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:36.367 passed 00:11:36.367 Test: admin_create_io_sq_shared_cq ...[2024-07-15 11:37:44.151226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:36.367 [2024-07-15 11:37:44.282764] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:36.367 [2024-07-15 11:37:44.319847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:36.367 passed 00:11:36.367 00:11:36.367 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.367 suites 1 1 n/a 0 0 00:11:36.367 tests 18 18 18 0 0 00:11:36.367 asserts 360 360 360 0 n/a 00:11:36.367 00:11:36.367 Elapsed time = 1.563 seconds 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2975672 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2975672 ']' 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2975672 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2975672 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2975672' 00:11:36.626 killing process with pid 2975672 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2975672 00:11:36.626 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2975672 00:11:36.884 11:37:44 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:36.884 00:11:36.884 real 0m5.770s 00:11:36.884 user 0m16.187s 00:11:36.884 sys 0m0.556s 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:36.884 ************************************ 00:11:36.884 END TEST nvmf_vfio_user_nvme_compliance 00:11:36.884 ************************************ 00:11:36.884 11:37:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:36.884 11:37:44 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:36.884 11:37:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.884 11:37:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.884 11:37:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.884 ************************************ 00:11:36.884 START TEST nvmf_vfio_user_fuzz 00:11:36.884 ************************************ 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:36.884 * Looking for test storage... 00:11:36.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.884 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.885 11:37:44 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2976515 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2976515' 00:11:36.885 Process pid: 2976515 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2976515 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2976515 ']' 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.885 11:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:37.452 11:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.452 11:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:37.452 11:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.387 malloc0 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:38.387 11:37:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:10.450 Fuzzing completed. 
Shutting down the fuzz application 00:12:10.450 00:12:10.450 Dumping successful admin opcodes: 00:12:10.450 8, 9, 10, 24, 00:12:10.450 Dumping successful io opcodes: 00:12:10.450 0, 00:12:10.450 NS: 0x200003a1ef00 I/O qp, Total commands completed: 645091, total successful commands: 2504, random_seed: 3151981824 00:12:10.450 NS: 0x200003a1ef00 admin qp, Total commands completed: 83180, total successful commands: 665, random_seed: 3213148608 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2976515 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2976515 ']' 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2976515 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2976515 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2976515' 00:12:10.450 killing process with pid 2976515 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2976515 00:12:10.450 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2976515 00:12:10.451 11:38:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:10.451 11:38:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:10.451 00:12:10.451 real 0m33.272s 00:12:10.451 user 0m32.278s 00:12:10.451 sys 0m28.712s 00:12:10.451 11:38:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.451 11:38:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.451 ************************************ 00:12:10.451 END TEST nvmf_vfio_user_fuzz 00:12:10.451 ************************************ 00:12:10.451 11:38:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:10.451 11:38:18 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.451 11:38:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:10.451 11:38:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.451 11:38:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.451 ************************************ 00:12:10.451 
START TEST nvmf_host_management 00:12:10.451 ************************************ 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.451 * Looking for test storage... 00:12:10.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.451 11:38:18 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.451 11:38:18 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.451 11:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:12.353 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:12.353 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:12.353 Found net devices under 0000:84:00.0: cvl_0_0 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:12.353 Found net devices under 0000:84:00.1: cvl_0_1 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:12.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:12:12.353 00:12:12.353 --- 10.0.0.2 ping statistics --- 00:12:12.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.353 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:12:12.353 00:12:12.353 --- 10.0.0.1 ping statistics --- 00:12:12.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.353 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:12.353 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2982028 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2982028 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2982028 ']' 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:12.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.613 11:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 [2024-07-15 11:38:20.418059] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:12.613 [2024-07-15 11:38:20.418136] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.613 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.613 [2024-07-15 11:38:20.486173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.613 [2024-07-15 11:38:20.598054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.613 [2024-07-15 11:38:20.598115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.613 [2024-07-15 11:38:20.598144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.613 [2024-07-15 11:38:20.598155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.613 [2024-07-15 11:38:20.598165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.613 [2024-07-15 11:38:20.598251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.613 [2024-07-15 11:38:20.598296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.613 [2024-07-15 11:38:20.598345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:12.613 [2024-07-15 11:38:20.598347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 [2024-07-15 11:38:21.366751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 11:38:21 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 Malloc0 00:12:13.545 [2024-07-15 11:38:21.427760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2982196 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2982196 /var/tmp/bdevperf.sock 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2982196 ']' 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
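At this point the target side is fully staged: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, a TCP transport and a Malloc0 bdev exist, and the subsystem is listening on 10.0.0.2:4420. The batched rpcs.txt that creates the subsystem is not echoed in the trace, so the following is only a sketch of an RPC sequence that would produce the same state against /var/tmp/spdk.sock; the NQNs are taken from the bdevperf config rendered later in the log, and the Malloc sizes (64 MiB, 512 B blocks) are assumptions based on the usual defaults in these tests.

# Sketch only -- a hand-rolled equivalent of the batched rpcs.txt consumed by rpc_cmd above.
rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc0 64 512
rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420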
00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:13.545 { 00:12:13.545 "params": { 00:12:13.545 "name": "Nvme$subsystem", 00:12:13.545 "trtype": "$TEST_TRANSPORT", 00:12:13.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:13.545 "adrfam": "ipv4", 00:12:13.545 "trsvcid": "$NVMF_PORT", 00:12:13.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:13.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:13.545 "hdgst": ${hdgst:-false}, 00:12:13.545 "ddgst": ${ddgst:-false} 00:12:13.545 }, 00:12:13.545 "method": "bdev_nvme_attach_controller" 00:12:13.545 } 00:12:13.545 EOF 00:12:13.545 )") 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:13.545 11:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:13.545 "params": { 00:12:13.545 "name": "Nvme0", 00:12:13.545 "trtype": "tcp", 00:12:13.545 "traddr": "10.0.0.2", 00:12:13.545 "adrfam": "ipv4", 00:12:13.545 "trsvcid": "4420", 00:12:13.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:13.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:13.545 "hdgst": false, 00:12:13.545 "ddgst": false 00:12:13.545 }, 00:12:13.545 "method": "bdev_nvme_attach_controller" 00:12:13.545 }' 00:12:13.545 [2024-07-15 11:38:21.507962] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:13.546 [2024-07-15 11:38:21.508064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982196 ] 00:12:13.805 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.805 [2024-07-15 11:38:21.574810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.805 [2024-07-15 11:38:21.686085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.064 Running I/O for 10 seconds... 
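The JSON fragment printed above is what bdevperf receives on /dev/fd/63: a single bdev_nvme_attach_controller call pointed at the listener created earlier. A minimal stand-alone equivalent is sketched below; the parameter values are copied from the rendered config in the log, while the outer "subsystems"/"config" wrapper is an assumption about how gen_nvmf_target_json assembles the fragment, and /tmp/nvme0.json is a hypothetical path.

# Sketch: reproduce the 10-second verify run by hand instead of via /dev/fd/63.
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10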
00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:14.064 11:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.323 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.323 [2024-07-15 11:38:22.266669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.323 [2024-07-15 11:38:22.266935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.323 [2024-07-15 11:38:22.266953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.266967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.266984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.266998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.267975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.267991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.324 [2024-07-15 11:38:22.268183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.324 [2024-07-15 11:38:22.268199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:12:14.324 [2024-07-15 11:38:22.268213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 
[2024-07-15 11:38:22.268502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:14.325 [2024-07-15 11:38:22.268652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.325 [2024-07-15 11:38:22.268754] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cc4a10 was disconnected and freed. reset controller. 
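Everything from the nvmf_subsystem_remove_host call down to the qpair teardown above is the fault this test injects: the host NQN is dropped from the subsystem while the verify workload is in flight, so every outstanding command completes with ABORTED - SQ DELETION, after which bdev_nvme disconnects the qpair and schedules a controller reset. A sketch of the same injection, runnable against the target's RPC socket (NQNs copied from the log):

# Drop the initiator's host NQN while I/O is running; expect the ABORTED - SQ DELETION burst seen above.
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0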
00:12:14.325 [2024-07-15 11:38:22.269907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:14.325 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.325 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:14.325 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.325 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 task offset: 77184 on job bdev=Nvme0n1 fails 00:12:14.325 00:12:14.325 Latency(us) 00:12:14.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.325 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:14.325 Job: Nvme0n1 ended in about 0.40 seconds with error 00:12:14.325 Verification LBA range: start 0x0 length 0x400 00:12:14.325 Nvme0n1 : 0.40 1455.07 90.94 161.67 0.00 38454.83 2597.17 35535.08 00:12:14.325 =================================================================================================================== 00:12:14.325 Total : 1455.07 90.94 161.67 0.00 38454.83 2597.17 35535.08 00:12:14.325 [2024-07-15 11:38:22.271819] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:14.325 [2024-07-15 11:38:22.271848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1893540 (9): Bad file descriptor 00:12:14.325 11:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.325 11:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:14.325 [2024-07-15 11:38:22.282785] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
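The failed-run summary above is internally consistent: the MiB/s column is average IOPS times the 64 KiB I/O size (1455.07 x 65536 / 1048576 = 90.94 MiB/s), and Fail/s times the 0.40 s runtime comes to roughly 65, which matches the queue depth of 64 in-flight commands that were aborted. The test then restores access and lets the pending reset reconnect; a sketch of that recovery step (NQNs copied from the log):

# Re-admit the host so the controller reset queued above can reconnect
# ("Resetting controller successful" in the trace).
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0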
00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2982196 00:12:15.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2982196) - No such process 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.699 { 00:12:15.699 "params": { 00:12:15.699 "name": "Nvme$subsystem", 00:12:15.699 "trtype": "$TEST_TRANSPORT", 00:12:15.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.699 "adrfam": "ipv4", 00:12:15.699 "trsvcid": "$NVMF_PORT", 00:12:15.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.699 "hdgst": ${hdgst:-false}, 00:12:15.699 "ddgst": ${ddgst:-false} 00:12:15.699 }, 00:12:15.699 "method": "bdev_nvme_attach_controller" 00:12:15.699 } 00:12:15.699 EOF 00:12:15.699 )") 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:15.699 11:38:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.699 "params": { 00:12:15.699 "name": "Nvme0", 00:12:15.699 "trtype": "tcp", 00:12:15.699 "traddr": "10.0.0.2", 00:12:15.699 "adrfam": "ipv4", 00:12:15.699 "trsvcid": "4420", 00:12:15.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:15.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:15.699 "hdgst": false, 00:12:15.699 "ddgst": false 00:12:15.699 }, 00:12:15.699 "method": "bdev_nvme_attach_controller" 00:12:15.699 }' 00:12:15.699 [2024-07-15 11:38:23.327937] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:15.699 [2024-07-15 11:38:23.328037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982468 ] 00:12:15.699 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.699 [2024-07-15 11:38:23.388100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.699 [2024-07-15 11:38:23.501482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.960 Running I/O for 1 seconds... 
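Before this second run, the harness kills the (already exited) first bdevperf, clears the per-core CPU lock files, and launches a fresh 1-second verify job with the same generated config, which this time arrives on /dev/fd/62. A condensed sketch of that relaunch; the flags are copied from the trace, and the process substitution is an assumption about how the fd path is produced:

kill -9 "$perfpid" 2>/dev/null || true        # first bdevperf is already gone, hence "No such process"
rm -f /var/tmp/spdk_cpu_lock_00{1..4}
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1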
00:12:16.898 00:12:16.898 Latency(us) 00:12:16.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:16.898 Verification LBA range: start 0x0 length 0x400 00:12:16.898 Nvme0n1 : 1.01 1539.06 96.19 0.00 0.00 40729.52 2475.80 33787.45 00:12:16.898 =================================================================================================================== 00:12:16.898 Total : 1539.06 96.19 0.00 0.00 40729.52 2475.80 33787.45 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.155 11:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.155 rmmod nvme_tcp 00:12:17.155 rmmod nvme_fabrics 00:12:17.155 rmmod nvme_keyring 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2982028 ']' 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2982028 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2982028 ']' 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2982028 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2982028 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2982028' 00:12:17.155 killing process with pid 2982028 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2982028 00:12:17.155 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2982028 00:12:17.413 [2024-07-15 11:38:25.313283] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.413 11:38:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.949 11:38:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.949 11:38:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:19.949 00:12:19.949 real 0m9.319s 00:12:19.949 user 0m21.925s 00:12:19.949 sys 0m2.818s 00:12:19.949 11:38:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.949 11:38:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:19.949 ************************************ 00:12:19.949 END TEST nvmf_host_management 00:12:19.949 ************************************ 00:12:19.949 11:38:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:19.949 11:38:27 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:19.949 11:38:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.949 11:38:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.949 11:38:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.949 ************************************ 00:12:19.949 START TEST nvmf_lvol 00:12:19.949 ************************************ 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:19.949 * Looking for test storage... 
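Stepping back to the host_management teardown that finished just above: nvmftestfini unwinds the whole setup by unloading the NVMe/TCP host modules, killing the target (pid 2982028), and removing the namespace plumbing before the next test (nvmf_lvol) rebuilds it. A rough shell equivalent, with the namespace deletion inferred rather than shown in the trace:

modprobe -r nvme-tcp nvme-fabrics             # the rmmod lines above
kill "$nvmfpid"                               # killprocess 2982028 in the trace
ip netns delete cvl_0_0_ns_spdk               # assumption: what _remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1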
00:12:19.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.949 11:38:27 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.949 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.950 11:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.950 11:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.950 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.950 11:38:27 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.950 11:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.950 11:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:21.871 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.871 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.871 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:21.872 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:21.872 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:21.872 Found net devices under 0000:84:00.0: cvl_0_0 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:21.872 Found net devices under 0000:84:00.1: cvl_0_1 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.872 
11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:12:21.872 00:12:21.872 --- 10.0.0.2 ping statistics --- 00:12:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.872 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:21.872 00:12:21.872 --- 10.0.0.1 ping statistics --- 00:12:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.872 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2984692 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2984692 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2984692 ']' 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.872 11:38:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:21.872 [2024-07-15 11:38:29.752177] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:21.872 [2024-07-15 11:38:29.752255] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.872 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.872 [2024-07-15 11:38:29.826577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.133 [2024-07-15 11:38:29.939389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.133 [2024-07-15 11:38:29.939455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
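Annotation (not part of the captured trace): the nvmf_tcp_init block above lets one host play both target and initiator by moving one e810 port (cvl_0_0, addressed 10.0.0.2) into the cvl_0_0_ns_spdk namespace while the other port (cvl_0_1, 10.0.0.1) stays in the default namespace, then launching nvmf_tgt inside that namespace. A condensed sketch assembled from the commands in this run follows; interface names, addresses and flags are the ones the trace used, and the nvmf_tgt path is shortened here:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator side -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target side -> initiator side
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7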
00:12:22.133 [2024-07-15 11:38:29.939468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.133 [2024-07-15 11:38:29.939479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.133 [2024-07-15 11:38:29.939488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.133 [2024-07-15 11:38:29.939547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.133 [2024-07-15 11:38:29.939600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.133 [2024-07-15 11:38:29.939603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:23.066 [2024-07-15 11:38:30.953652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.066 11:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.323 11:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:23.323 11:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.579 11:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:23.579 11:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:23.836 11:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:24.094 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=843dc0f3-ee58-4cd2-96b9-19ae6beb2d8d 00:12:24.094 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 843dc0f3-ee58-4cd2-96b9-19ae6beb2d8d lvol 20 00:12:24.397 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d86c2dad-09d3-4772-8220-dea6f0007763 00:12:24.397 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:24.654 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d86c2dad-09d3-4772-8220-dea6f0007763 00:12:24.911 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:12:25.168 [2024-07-15 11:38:32.971173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.168 11:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:25.425 11:38:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2985123 00:12:25.425 11:38:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:25.425 11:38:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:25.425 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.361 11:38:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d86c2dad-09d3-4772-8220-dea6f0007763 MY_SNAPSHOT 00:12:26.619 11:38:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5967f5f2-4fd1-4849-9b53-2dbc0eefd006 00:12:26.619 11:38:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d86c2dad-09d3-4772-8220-dea6f0007763 30 00:12:27.184 11:38:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5967f5f2-4fd1-4849-9b53-2dbc0eefd006 MY_CLONE 00:12:27.442 11:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d46f3f2d-df90-42ff-8f7f-800834e2b04c 00:12:27.442 11:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d46f3f2d-df90-42ff-8f7f-800834e2b04c 00:12:28.009 11:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2985123 00:12:36.133 Initializing NVMe Controllers 00:12:36.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:36.133 Controller IO queue size 128, less than required. 00:12:36.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:36.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:36.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:36.133 Initialization complete. Launching workers. 
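Annotation (not part of the captured trace): by this point nvmf_lvol has built the full target stack over TCP and then exercised lvol operations while spdk_nvme_perf keeps 4 KiB random writes in flight against the exported namespace. Gathered from the RPC calls above, the sequence is roughly as follows; the bracketed UUIDs stand for the values each call returned in this run:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                        # Malloc0
  rpc.py bdev_malloc_create 64 512                        # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs               # -> <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20           # -> <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf ... -o 4096 -q 128 -w randwrite -t 10 runs against the subsystem:
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT       # -> <snap-uuid>
  rpc.py bdev_lvol_resize <lvol-uuid> 30
  rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE             # -> <clone-uuid>
  rpc.py bdev_lvol_inflate <clone-uuid>
The per-core perf results that follow are the output of that 10 second run.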
00:12:36.133 ======================================================== 00:12:36.133 Latency(us) 00:12:36.133 Device Information : IOPS MiB/s Average min max 00:12:36.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10566.30 41.27 12120.43 2308.52 82320.55 00:12:36.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10411.50 40.67 12296.40 2276.82 72696.44 00:12:36.133 ======================================================== 00:12:36.133 Total : 20977.80 81.94 12207.76 2276.82 82320.55 00:12:36.133 00:12:36.133 11:38:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.133 11:38:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d86c2dad-09d3-4772-8220-dea6f0007763 00:12:36.391 11:38:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 843dc0f3-ee58-4cd2-96b9-19ae6beb2d8d 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.649 rmmod nvme_tcp 00:12:36.649 rmmod nvme_fabrics 00:12:36.649 rmmod nvme_keyring 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2984692 ']' 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2984692 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2984692 ']' 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2984692 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2984692 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2984692' 00:12:36.649 killing process with pid 2984692 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2984692 00:12:36.649 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2984692 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.908 
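Annotation (not part of the captured trace): teardown runs in the reverse order of setup, and it finishes just below with namespace removal and address flushing. A minimal sketch of the cleanup the trace performs, with placeholders for the run-specific UUIDs and pid:
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete <lvol-uuid>
  rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill <nvmfpid>
  # _remove_spdk_ns (called with its output discarded) drops cvl_0_0_ns_spdk,
  # then the remaining interface address is flushed:
  ip -4 addr flush cvl_0_1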
11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.908 11:38:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.511 11:38:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.511 00:12:39.511 real 0m19.464s 00:12:39.511 user 1m6.351s 00:12:39.511 sys 0m5.758s 00:12:39.511 11:38:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.511 11:38:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:39.511 ************************************ 00:12:39.511 END TEST nvmf_lvol 00:12:39.511 ************************************ 00:12:39.511 11:38:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.511 11:38:46 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:39.511 11:38:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.511 11:38:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.511 11:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.511 ************************************ 00:12:39.511 START TEST nvmf_lvs_grow 00:12:39.511 ************************************ 00:12:39.511 11:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:39.511 * Looking for test storage... 
00:12:39.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.511 11:38:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:41.430 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:41.430 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:41.430 Found net devices under 0000:84:00.0: cvl_0_0 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:41.430 Found net devices under 0000:84:00.1: cvl_0_1 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.430 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:41.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:12:41.431 00:12:41.431 --- 10.0.0.2 ping statistics --- 00:12:41.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.431 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:12:41.431 00:12:41.431 --- 10.0.0.1 ping statistics --- 00:12:41.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.431 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2988402 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2988402 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2988402 ']' 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.431 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:41.431 [2024-07-15 11:38:49.307843] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:41.431 [2024-07-15 11:38:49.307946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.431 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.431 [2024-07-15 11:38:49.373792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.689 [2024-07-15 11:38:49.484871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.689 [2024-07-15 11:38:49.484934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.689 [2024-07-15 11:38:49.484961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.689 [2024-07-15 11:38:49.484972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.689 [2024-07-15 11:38:49.484982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.689 [2024-07-15 11:38:49.485016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.689 11:38:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:41.947 [2024-07-15 11:38:49.902762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.947 11:38:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:41.947 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:41.947 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.947 11:38:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:42.206 ************************************ 00:12:42.206 START TEST lvs_grow_clean 00:12:42.206 ************************************ 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.206 11:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:42.479 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:42.479 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:42.738 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=48767121-274b-44b6-b26e-bfea929c37a6 00:12:42.738 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:42.738 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:42.738 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:42.739 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:42.739 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 48767121-274b-44b6-b26e-bfea929c37a6 lvol 150 00:12:42.997 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=84bd321b-799e-4626-af75-c59f978d558d 00:12:42.997 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.997 11:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:43.256 [2024-07-15 11:38:51.204911] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:43.256 [2024-07-15 11:38:51.205013] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:43.256 true 00:12:43.256 11:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:43.256 11:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:43.513 11:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:43.513 11:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:43.771 11:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84bd321b-799e-4626-af75-c59f978d558d 00:12:44.030 11:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:44.290 [2024-07-15 11:38:52.183914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.290 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2988836 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2988836 /var/tmp/bdevperf.sock 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2988836 ']' 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:44.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.549 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:44.549 [2024-07-15 11:38:52.482298] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
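Annotation (not part of the captured trace): lvs_grow_clean builds its lvol store on an AIO bdev backed by a sparse file, so the backing device can be grown by extending the file and rescanning, without recreating anything. Reconstructed from the calls above, with the file path shortened and UUID placeholders for the returned values:
  truncate -s 200M aio_bdev_file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 clusters of 4 MiB
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M aio_bdev_file
  rpc.py bdev_aio_rescan aio_bdev      # bdev grows from 51200 to 102400 blocks; cluster count stays 49
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
The store itself is only grown later in this trace, while bdevperf drives random writes through Nvme0n1, via rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>, after which total_data_clusters is expected to report 99.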
00:12:44.549 [2024-07-15 11:38:52.482385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988836 ] 00:12:44.549 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.806 [2024-07-15 11:38:52.539933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.806 [2024-07-15 11:38:52.645730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.806 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.806 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:44.806 11:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:45.374 Nvme0n1 00:12:45.374 11:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:45.634 [ 00:12:45.634 { 00:12:45.634 "name": "Nvme0n1", 00:12:45.634 "aliases": [ 00:12:45.634 "84bd321b-799e-4626-af75-c59f978d558d" 00:12:45.634 ], 00:12:45.634 "product_name": "NVMe disk", 00:12:45.634 "block_size": 4096, 00:12:45.634 "num_blocks": 38912, 00:12:45.634 "uuid": "84bd321b-799e-4626-af75-c59f978d558d", 00:12:45.634 "assigned_rate_limits": { 00:12:45.634 "rw_ios_per_sec": 0, 00:12:45.634 "rw_mbytes_per_sec": 0, 00:12:45.634 "r_mbytes_per_sec": 0, 00:12:45.634 "w_mbytes_per_sec": 0 00:12:45.634 }, 00:12:45.634 "claimed": false, 00:12:45.634 "zoned": false, 00:12:45.634 "supported_io_types": { 00:12:45.634 "read": true, 00:12:45.634 "write": true, 00:12:45.634 "unmap": true, 00:12:45.634 "flush": true, 00:12:45.634 "reset": true, 00:12:45.634 "nvme_admin": true, 00:12:45.634 "nvme_io": true, 00:12:45.634 "nvme_io_md": false, 00:12:45.634 "write_zeroes": true, 00:12:45.634 "zcopy": false, 00:12:45.634 "get_zone_info": false, 00:12:45.634 "zone_management": false, 00:12:45.634 "zone_append": false, 00:12:45.634 "compare": true, 00:12:45.634 "compare_and_write": true, 00:12:45.634 "abort": true, 00:12:45.634 "seek_hole": false, 00:12:45.634 "seek_data": false, 00:12:45.634 "copy": true, 00:12:45.634 "nvme_iov_md": false 00:12:45.634 }, 00:12:45.634 "memory_domains": [ 00:12:45.634 { 00:12:45.634 "dma_device_id": "system", 00:12:45.634 "dma_device_type": 1 00:12:45.634 } 00:12:45.634 ], 00:12:45.634 "driver_specific": { 00:12:45.634 "nvme": [ 00:12:45.634 { 00:12:45.634 "trid": { 00:12:45.634 "trtype": "TCP", 00:12:45.634 "adrfam": "IPv4", 00:12:45.634 "traddr": "10.0.0.2", 00:12:45.634 "trsvcid": "4420", 00:12:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:45.634 }, 00:12:45.634 "ctrlr_data": { 00:12:45.634 "cntlid": 1, 00:12:45.634 "vendor_id": "0x8086", 00:12:45.634 "model_number": "SPDK bdev Controller", 00:12:45.634 "serial_number": "SPDK0", 00:12:45.634 "firmware_revision": "24.09", 00:12:45.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:45.634 "oacs": { 00:12:45.634 "security": 0, 00:12:45.634 "format": 0, 00:12:45.634 "firmware": 0, 00:12:45.634 "ns_manage": 0 00:12:45.634 }, 00:12:45.634 "multi_ctrlr": true, 00:12:45.634 "ana_reporting": false 00:12:45.634 }, 
00:12:45.634 "vs": { 00:12:45.634 "nvme_version": "1.3" 00:12:45.634 }, 00:12:45.634 "ns_data": { 00:12:45.634 "id": 1, 00:12:45.634 "can_share": true 00:12:45.634 } 00:12:45.634 } 00:12:45.634 ], 00:12:45.634 "mp_policy": "active_passive" 00:12:45.634 } 00:12:45.634 } 00:12:45.634 ] 00:12:45.634 11:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2988972 00:12:45.634 11:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:45.634 11:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:45.634 Running I/O for 10 seconds... 00:12:47.016 Latency(us) 00:12:47.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.016 Nvme0n1 : 1.00 16654.00 65.05 0.00 0.00 0.00 0.00 0.00 00:12:47.016 =================================================================================================================== 00:12:47.016 Total : 16654.00 65.05 0.00 0.00 0.00 0.00 0.00 00:12:47.016 00:12:47.585 11:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:47.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.585 Nvme0n1 : 2.00 16905.50 66.04 0.00 0.00 0.00 0.00 0.00 00:12:47.585 =================================================================================================================== 00:12:47.585 Total : 16905.50 66.04 0.00 0.00 0.00 0.00 0.00 00:12:47.585 00:12:47.842 true 00:12:47.842 11:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:47.842 11:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:48.101 11:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:48.101 11:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:48.101 11:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2988972 00:12:48.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.668 Nvme0n1 : 3.00 16964.33 66.27 0.00 0.00 0.00 0.00 0.00 00:12:48.668 =================================================================================================================== 00:12:48.668 Total : 16964.33 66.27 0.00 0.00 0.00 0.00 0.00 00:12:48.668 00:12:49.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.604 Nvme0n1 : 4.00 17060.00 66.64 0.00 0.00 0.00 0.00 0.00 00:12:49.604 =================================================================================================================== 00:12:49.604 Total : 17060.00 66.64 0.00 0.00 0.00 0.00 0.00 00:12:49.604 00:12:50.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.978 Nvme0n1 : 5.00 17195.00 67.17 0.00 0.00 0.00 0.00 0.00 00:12:50.978 =================================================================================================================== 00:12:50.978 
Total : 17195.00 67.17 0.00 0.00 0.00 0.00 0.00 00:12:50.978 00:12:51.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.916 Nvme0n1 : 6.00 17244.33 67.36 0.00 0.00 0.00 0.00 0.00 00:12:51.916 =================================================================================================================== 00:12:51.916 Total : 17244.33 67.36 0.00 0.00 0.00 0.00 0.00 00:12:51.916 00:12:52.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.853 Nvme0n1 : 7.00 17252.86 67.39 0.00 0.00 0.00 0.00 0.00 00:12:52.853 =================================================================================================================== 00:12:52.853 Total : 17252.86 67.39 0.00 0.00 0.00 0.00 0.00 00:12:52.853 00:12:53.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.789 Nvme0n1 : 8.00 17318.75 67.65 0.00 0.00 0.00 0.00 0.00 00:12:53.789 =================================================================================================================== 00:12:53.789 Total : 17318.75 67.65 0.00 0.00 0.00 0.00 0.00 00:12:53.789 00:12:54.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.721 Nvme0n1 : 9.00 17357.89 67.80 0.00 0.00 0.00 0.00 0.00 00:12:54.721 =================================================================================================================== 00:12:54.721 Total : 17357.89 67.80 0.00 0.00 0.00 0.00 0.00 00:12:54.721 00:12:55.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.653 Nvme0n1 : 10.00 17400.10 67.97 0.00 0.00 0.00 0.00 0.00 00:12:55.653 =================================================================================================================== 00:12:55.653 Total : 17400.10 67.97 0.00 0.00 0.00 0.00 0.00 00:12:55.653 00:12:55.653 00:12:55.653 Latency(us) 00:12:55.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.653 Nvme0n1 : 10.00 17405.65 67.99 0.00 0.00 7350.16 1978.22 15631.55 00:12:55.653 =================================================================================================================== 00:12:55.653 Total : 17405.65 67.99 0.00 0.00 7350.16 1978.22 15631.55 00:12:55.653 0 00:12:55.653 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2988836 00:12:55.653 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2988836 ']' 00:12:55.653 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2988836 00:12:55.653 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:55.653 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.653 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2988836 00:12:55.912 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:55.912 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:55.912 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2988836' 00:12:55.912 killing process with pid 2988836 00:12:55.912 11:39:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2988836 00:12:55.912 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.912 00:12:55.912 Latency(us) 00:12:55.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.912 =================================================================================================================== 00:12:55.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.912 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2988836 00:12:56.171 11:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.429 11:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:56.687 11:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:56.687 11:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:56.945 11:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:56.945 11:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:56.945 11:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:57.204 [2024-07-15 11:39:04.986453] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:57.204 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:57.204 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:57.204 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:57.204 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.204 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:57.205 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:57.464 request: 00:12:57.464 { 00:12:57.464 "uuid": "48767121-274b-44b6-b26e-bfea929c37a6", 00:12:57.464 "method": "bdev_lvol_get_lvstores", 00:12:57.464 "req_id": 1 00:12:57.464 } 00:12:57.464 Got JSON-RPC error response 00:12:57.464 response: 00:12:57.464 { 00:12:57.464 "code": -19, 00:12:57.464 "message": "No such device" 00:12:57.464 } 00:12:57.464 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:57.464 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:57.464 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:57.464 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:57.464 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:57.724 aio_bdev 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 84bd321b-799e-4626-af75-c59f978d558d 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=84bd321b-799e-4626-af75-c59f978d558d 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:57.724 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:57.981 11:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 84bd321b-799e-4626-af75-c59f978d558d -t 2000 00:12:58.241 [ 00:12:58.241 { 00:12:58.241 "name": "84bd321b-799e-4626-af75-c59f978d558d", 00:12:58.241 "aliases": [ 00:12:58.241 "lvs/lvol" 00:12:58.241 ], 00:12:58.241 "product_name": "Logical Volume", 00:12:58.241 "block_size": 4096, 00:12:58.241 "num_blocks": 38912, 00:12:58.241 "uuid": "84bd321b-799e-4626-af75-c59f978d558d", 00:12:58.241 "assigned_rate_limits": { 00:12:58.241 "rw_ios_per_sec": 0, 00:12:58.241 "rw_mbytes_per_sec": 0, 00:12:58.241 "r_mbytes_per_sec": 0, 00:12:58.241 "w_mbytes_per_sec": 0 00:12:58.241 }, 00:12:58.241 "claimed": false, 00:12:58.241 "zoned": false, 00:12:58.241 "supported_io_types": { 00:12:58.241 "read": true, 00:12:58.241 "write": true, 00:12:58.241 "unmap": true, 00:12:58.241 "flush": false, 00:12:58.241 "reset": true, 00:12:58.241 "nvme_admin": false, 00:12:58.241 "nvme_io": false, 00:12:58.241 
"nvme_io_md": false, 00:12:58.241 "write_zeroes": true, 00:12:58.241 "zcopy": false, 00:12:58.241 "get_zone_info": false, 00:12:58.241 "zone_management": false, 00:12:58.241 "zone_append": false, 00:12:58.241 "compare": false, 00:12:58.241 "compare_and_write": false, 00:12:58.241 "abort": false, 00:12:58.241 "seek_hole": true, 00:12:58.241 "seek_data": true, 00:12:58.241 "copy": false, 00:12:58.241 "nvme_iov_md": false 00:12:58.241 }, 00:12:58.241 "driver_specific": { 00:12:58.241 "lvol": { 00:12:58.241 "lvol_store_uuid": "48767121-274b-44b6-b26e-bfea929c37a6", 00:12:58.241 "base_bdev": "aio_bdev", 00:12:58.241 "thin_provision": false, 00:12:58.241 "num_allocated_clusters": 38, 00:12:58.241 "snapshot": false, 00:12:58.241 "clone": false, 00:12:58.241 "esnap_clone": false 00:12:58.241 } 00:12:58.241 } 00:12:58.241 } 00:12:58.241 ] 00:12:58.241 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:58.241 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:58.241 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:58.501 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:58.501 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:58.501 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:58.760 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:58.760 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 84bd321b-799e-4626-af75-c59f978d558d 00:12:59.020 11:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48767121-274b-44b6-b26e-bfea929c37a6 00:12:59.281 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.541 00:12:59.541 real 0m17.359s 00:12:59.541 user 0m16.793s 00:12:59.541 sys 0m1.955s 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:59.541 ************************************ 00:12:59.541 END TEST lvs_grow_clean 00:12:59.541 ************************************ 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.541 ************************************ 00:12:59.541 START TEST lvs_grow_dirty 00:12:59.541 ************************************ 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.541 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:59.799 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:59.799 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:00.058 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:00.058 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:00.058 11:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:00.317 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:00.317 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:00.317 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd lvol 150 00:13:00.575 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:00.575 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:00.575 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:00.834 
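The lvs_grow_dirty variant that starts here rebuilds the same topology from scratch: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters, a 150 MiB lvol, then a grow of the backing file to 400 MiB followed by a rescan. A compacted sketch of those steps, with the rpc.py path shortened and $LVS capturing the lvstore UUID the way the test script does:

    # Sketch of the lvs_grow_dirty setup traced above (paths shortened).
    aio_file=test/nvmf/target/aio_bdev
    rpc=./scripts/rpc.py
    truncate -s 200M "$aio_file"                                    # initial backing size
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                  # 4 KiB block size
    LVS=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)            # prints the lvstore UUID
    LVOL=$($rpc bdev_lvol_create -u "$LVS" lvol 150)                # 150 MiB logical volume
    truncate -s 400M "$aio_file"                                    # grow the file under the bdev
    $rpc bdev_aio_rescan aio_bdev                                   # block count 51200 -> 102400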
[2024-07-15 11:39:08.668925] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:00.834 [2024-07-15 11:39:08.669015] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:00.834 true 00:13:00.834 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:00.834 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:01.092 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:01.092 11:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:01.350 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:01.609 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:01.869 [2024-07-15 11:39:09.675965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.869 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2991515 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2991515 /var/tmp/bdevperf.sock 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2991515 ']' 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:02.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
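With the lvol in place, the log next exports it over NVMe/TCP and points a standalone bdevperf instance at it. Roughly, again with paths shortened and $LVOL standing in for the lvol UUID created above:

    # Sketch: export the lvol as namespace 1 of cnode0 and listen on 10.0.0.2:4420 (TCP).
    rpc=./scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Separate bdevperf process: 4 KiB random writes, queue depth 128, 10 s, started suspended (-z).
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests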
00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.128 11:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:02.128 [2024-07-15 11:39:09.973544] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:02.128 [2024-07-15 11:39:09.973631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991515 ] 00:13:02.128 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.128 [2024-07-15 11:39:10.036384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.386 [2024-07-15 11:39:10.150684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.386 11:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.386 11:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:02.386 11:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:02.953 Nvme0n1 00:13:02.953 11:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:03.212 [ 00:13:03.212 { 00:13:03.212 "name": "Nvme0n1", 00:13:03.212 "aliases": [ 00:13:03.212 "6aa08425-f68f-4377-bd9f-3f074100bfad" 00:13:03.212 ], 00:13:03.212 "product_name": "NVMe disk", 00:13:03.212 "block_size": 4096, 00:13:03.212 "num_blocks": 38912, 00:13:03.212 "uuid": "6aa08425-f68f-4377-bd9f-3f074100bfad", 00:13:03.212 "assigned_rate_limits": { 00:13:03.212 "rw_ios_per_sec": 0, 00:13:03.212 "rw_mbytes_per_sec": 0, 00:13:03.212 "r_mbytes_per_sec": 0, 00:13:03.212 "w_mbytes_per_sec": 0 00:13:03.212 }, 00:13:03.212 "claimed": false, 00:13:03.212 "zoned": false, 00:13:03.212 "supported_io_types": { 00:13:03.212 "read": true, 00:13:03.212 "write": true, 00:13:03.212 "unmap": true, 00:13:03.212 "flush": true, 00:13:03.212 "reset": true, 00:13:03.212 "nvme_admin": true, 00:13:03.212 "nvme_io": true, 00:13:03.212 "nvme_io_md": false, 00:13:03.212 "write_zeroes": true, 00:13:03.212 "zcopy": false, 00:13:03.212 "get_zone_info": false, 00:13:03.212 "zone_management": false, 00:13:03.212 "zone_append": false, 00:13:03.213 "compare": true, 00:13:03.213 "compare_and_write": true, 00:13:03.213 "abort": true, 00:13:03.213 "seek_hole": false, 00:13:03.213 "seek_data": false, 00:13:03.213 "copy": true, 00:13:03.213 "nvme_iov_md": false 00:13:03.213 }, 00:13:03.213 "memory_domains": [ 00:13:03.213 { 00:13:03.213 "dma_device_id": "system", 00:13:03.213 "dma_device_type": 1 00:13:03.213 } 00:13:03.213 ], 00:13:03.213 "driver_specific": { 00:13:03.213 "nvme": [ 00:13:03.213 { 00:13:03.213 "trid": { 00:13:03.213 "trtype": "TCP", 00:13:03.213 "adrfam": "IPv4", 00:13:03.213 "traddr": "10.0.0.2", 00:13:03.213 "trsvcid": "4420", 00:13:03.213 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:03.213 }, 00:13:03.213 "ctrlr_data": { 00:13:03.213 "cntlid": 1, 00:13:03.213 "vendor_id": "0x8086", 00:13:03.213 "model_number": "SPDK bdev Controller", 00:13:03.213 "serial_number": "SPDK0", 
00:13:03.213 "firmware_revision": "24.09", 00:13:03.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:03.213 "oacs": { 00:13:03.213 "security": 0, 00:13:03.213 "format": 0, 00:13:03.213 "firmware": 0, 00:13:03.213 "ns_manage": 0 00:13:03.213 }, 00:13:03.213 "multi_ctrlr": true, 00:13:03.213 "ana_reporting": false 00:13:03.213 }, 00:13:03.213 "vs": { 00:13:03.213 "nvme_version": "1.3" 00:13:03.213 }, 00:13:03.213 "ns_data": { 00:13:03.213 "id": 1, 00:13:03.213 "can_share": true 00:13:03.213 } 00:13:03.213 } 00:13:03.213 ], 00:13:03.213 "mp_policy": "active_passive" 00:13:03.213 } 00:13:03.213 } 00:13:03.213 ] 00:13:03.213 11:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2991653 00:13:03.213 11:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:03.213 11:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:03.213 Running I/O for 10 seconds... 00:13:04.151 Latency(us) 00:13:04.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.151 Nvme0n1 : 1.00 16586.00 64.79 0.00 0.00 0.00 0.00 0.00 00:13:04.151 =================================================================================================================== 00:13:04.151 Total : 16586.00 64.79 0.00 0.00 0.00 0.00 0.00 00:13:04.151 00:13:05.088 11:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:05.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.359 Nvme0n1 : 2.00 16808.00 65.66 0.00 0.00 0.00 0.00 0.00 00:13:05.359 =================================================================================================================== 00:13:05.359 Total : 16808.00 65.66 0.00 0.00 0.00 0.00 0.00 00:13:05.359 00:13:05.359 true 00:13:05.359 11:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:05.359 11:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:05.626 11:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:05.626 11:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:05.626 11:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2991653 00:13:06.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.195 Nvme0n1 : 3.00 16955.00 66.23 0.00 0.00 0.00 0.00 0.00 00:13:06.195 =================================================================================================================== 00:13:06.195 Total : 16955.00 66.23 0.00 0.00 0.00 0.00 0.00 00:13:06.195 00:13:07.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.130 Nvme0n1 : 4.00 17054.25 66.62 0.00 0.00 0.00 0.00 0.00 00:13:07.130 =================================================================================================================== 00:13:07.130 Total : 17054.25 66.62 0.00 
0.00 0.00 0.00 0.00 00:13:07.130 00:13:08.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.506 Nvme0n1 : 5.00 17179.60 67.11 0.00 0.00 0.00 0.00 0.00 00:13:08.506 =================================================================================================================== 00:13:08.506 Total : 17179.60 67.11 0.00 0.00 0.00 0.00 0.00 00:13:08.506 00:13:09.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.446 Nvme0n1 : 6.00 17242.83 67.35 0.00 0.00 0.00 0.00 0.00 00:13:09.446 =================================================================================================================== 00:13:09.446 Total : 17242.83 67.35 0.00 0.00 0.00 0.00 0.00 00:13:09.446 00:13:10.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.453 Nvme0n1 : 7.00 17252.86 67.39 0.00 0.00 0.00 0.00 0.00 00:13:10.453 =================================================================================================================== 00:13:10.453 Total : 17252.86 67.39 0.00 0.00 0.00 0.00 0.00 00:13:10.453 00:13:11.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.390 Nvme0n1 : 8.00 17276.25 67.49 0.00 0.00 0.00 0.00 0.00 00:13:11.390 =================================================================================================================== 00:13:11.390 Total : 17276.25 67.49 0.00 0.00 0.00 0.00 0.00 00:13:11.390 00:13:12.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.363 Nvme0n1 : 9.00 17304.44 67.60 0.00 0.00 0.00 0.00 0.00 00:13:12.364 =================================================================================================================== 00:13:12.364 Total : 17304.44 67.60 0.00 0.00 0.00 0.00 0.00 00:13:12.364 00:13:13.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.301 Nvme0n1 : 10.00 17325.40 67.68 0.00 0.00 0.00 0.00 0.00 00:13:13.301 =================================================================================================================== 00:13:13.301 Total : 17325.40 67.68 0.00 0.00 0.00 0.00 0.00 00:13:13.301 00:13:13.301 00:13:13.301 Latency(us) 00:13:13.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.301 Nvme0n1 : 10.01 17327.18 67.68 0.00 0.00 7382.65 4344.79 14951.92 00:13:13.301 =================================================================================================================== 00:13:13.301 Total : 17327.18 67.68 0.00 0.00 7382.65 4344.79 14951.92 00:13:13.301 0 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2991515 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2991515 ']' 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2991515 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2991515 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:13.302 11:39:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2991515' 00:13:13.302 killing process with pid 2991515 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2991515 00:13:13.302 Received shutdown signal, test time was about 10.000000 seconds 00:13:13.302 00:13:13.302 Latency(us) 00:13:13.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.302 =================================================================================================================== 00:13:13.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:13.302 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2991515 00:13:13.560 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.817 11:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:14.074 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:14.074 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2988402 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2988402 00:13:14.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2988402 Killed "${NVMF_APP[@]}" "$@" 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2992992 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2992992 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2992992 ']' 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.334 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:14.593 [2024-07-15 11:39:22.342793] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:14.594 [2024-07-15 11:39:22.342896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.594 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.594 [2024-07-15 11:39:22.410913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.594 [2024-07-15 11:39:22.520294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.594 [2024-07-15 11:39:22.520357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.594 [2024-07-15 11:39:22.520386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.594 [2024-07-15 11:39:22.520398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.594 [2024-07-15 11:39:22.520408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
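The "dirty" part of the test is visible just above: the original nvmf target (pid 2988402) is killed with SIGKILL rather than shut down cleanly, so the lvstore metadata on the AIO file is never flushed, and a fresh target is started inside the cvl_0_0_ns_spdk network namespace. A sketch of that restart, assuming $rootdir stands for the Jenkins workspace spdk directory; the harness uses its own waitforlisten helper, and polling an RPC such as spdk_get_version is one way to approximate it:

    # Kill the old target hard (leaves the lvstore "dirty"), then start a new one.
    kill -9 2988402                                                  # pid taken from the log
    ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Wait until the new target answers on its RPC socket before issuing further RPCs.
    until "$rootdir"/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done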
00:13:14.594 [2024-07-15 11:39:22.520442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.851 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:15.111 [2024-07-15 11:39:22.889931] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:15.111 [2024-07-15 11:39:22.890088] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:15.111 [2024-07-15 11:39:22.890135] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:15.111 11:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:15.369 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6aa08425-f68f-4377-bd9f-3f074100bfad -t 2000 00:13:15.627 [ 00:13:15.628 { 00:13:15.628 "name": "6aa08425-f68f-4377-bd9f-3f074100bfad", 00:13:15.628 "aliases": [ 00:13:15.628 "lvs/lvol" 00:13:15.628 ], 00:13:15.628 "product_name": "Logical Volume", 00:13:15.628 "block_size": 4096, 00:13:15.628 "num_blocks": 38912, 00:13:15.628 "uuid": "6aa08425-f68f-4377-bd9f-3f074100bfad", 00:13:15.628 "assigned_rate_limits": { 00:13:15.628 "rw_ios_per_sec": 0, 00:13:15.628 "rw_mbytes_per_sec": 0, 00:13:15.628 "r_mbytes_per_sec": 0, 00:13:15.628 "w_mbytes_per_sec": 0 00:13:15.628 }, 00:13:15.628 "claimed": false, 00:13:15.628 "zoned": false, 00:13:15.628 "supported_io_types": { 00:13:15.628 "read": true, 00:13:15.628 "write": true, 00:13:15.628 "unmap": true, 00:13:15.628 "flush": false, 00:13:15.628 "reset": true, 00:13:15.628 "nvme_admin": false, 00:13:15.628 "nvme_io": false, 00:13:15.628 "nvme_io_md": 
false, 00:13:15.628 "write_zeroes": true, 00:13:15.628 "zcopy": false, 00:13:15.628 "get_zone_info": false, 00:13:15.628 "zone_management": false, 00:13:15.628 "zone_append": false, 00:13:15.628 "compare": false, 00:13:15.628 "compare_and_write": false, 00:13:15.628 "abort": false, 00:13:15.628 "seek_hole": true, 00:13:15.628 "seek_data": true, 00:13:15.628 "copy": false, 00:13:15.628 "nvme_iov_md": false 00:13:15.628 }, 00:13:15.628 "driver_specific": { 00:13:15.628 "lvol": { 00:13:15.628 "lvol_store_uuid": "31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd", 00:13:15.628 "base_bdev": "aio_bdev", 00:13:15.628 "thin_provision": false, 00:13:15.628 "num_allocated_clusters": 38, 00:13:15.628 "snapshot": false, 00:13:15.628 "clone": false, 00:13:15.628 "esnap_clone": false 00:13:15.628 } 00:13:15.628 } 00:13:15.628 } 00:13:15.628 ] 00:13:15.628 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:15.628 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:15.628 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:15.886 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:15.886 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:15.886 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:16.145 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:16.145 11:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:16.404 [2024-07-15 11:39:24.179426] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:16.404 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:16.662 request: 00:13:16.662 { 00:13:16.662 "uuid": "31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd", 00:13:16.662 "method": "bdev_lvol_get_lvstores", 00:13:16.662 "req_id": 1 00:13:16.662 } 00:13:16.662 Got JSON-RPC error response 00:13:16.662 response: 00:13:16.662 { 00:13:16.662 "code": -19, 00:13:16.662 "message": "No such device" 00:13:16.662 } 00:13:16.662 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:16.662 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:16.662 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:16.662 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:16.662 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:16.920 aio_bdev 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:16.920 11:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:17.179 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6aa08425-f68f-4377-bd9f-3f074100bfad -t 2000 00:13:17.440 [ 00:13:17.440 { 00:13:17.440 "name": "6aa08425-f68f-4377-bd9f-3f074100bfad", 00:13:17.440 "aliases": [ 00:13:17.440 "lvs/lvol" 00:13:17.440 ], 00:13:17.440 "product_name": "Logical Volume", 00:13:17.440 "block_size": 4096, 00:13:17.440 "num_blocks": 38912, 00:13:17.440 "uuid": "6aa08425-f68f-4377-bd9f-3f074100bfad", 00:13:17.440 "assigned_rate_limits": { 00:13:17.440 "rw_ios_per_sec": 0, 00:13:17.440 "rw_mbytes_per_sec": 0, 00:13:17.440 "r_mbytes_per_sec": 0, 00:13:17.440 "w_mbytes_per_sec": 0 00:13:17.440 }, 00:13:17.440 "claimed": false, 00:13:17.440 "zoned": false, 00:13:17.440 "supported_io_types": { 
00:13:17.440 "read": true, 00:13:17.440 "write": true, 00:13:17.440 "unmap": true, 00:13:17.440 "flush": false, 00:13:17.440 "reset": true, 00:13:17.440 "nvme_admin": false, 00:13:17.440 "nvme_io": false, 00:13:17.440 "nvme_io_md": false, 00:13:17.440 "write_zeroes": true, 00:13:17.440 "zcopy": false, 00:13:17.440 "get_zone_info": false, 00:13:17.440 "zone_management": false, 00:13:17.440 "zone_append": false, 00:13:17.440 "compare": false, 00:13:17.440 "compare_and_write": false, 00:13:17.440 "abort": false, 00:13:17.440 "seek_hole": true, 00:13:17.440 "seek_data": true, 00:13:17.440 "copy": false, 00:13:17.440 "nvme_iov_md": false 00:13:17.440 }, 00:13:17.440 "driver_specific": { 00:13:17.440 "lvol": { 00:13:17.440 "lvol_store_uuid": "31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd", 00:13:17.440 "base_bdev": "aio_bdev", 00:13:17.440 "thin_provision": false, 00:13:17.440 "num_allocated_clusters": 38, 00:13:17.440 "snapshot": false, 00:13:17.440 "clone": false, 00:13:17.440 "esnap_clone": false 00:13:17.440 } 00:13:17.440 } 00:13:17.440 } 00:13:17.440 ] 00:13:17.440 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:17.440 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:17.440 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:17.700 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:17.700 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:17.700 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:17.960 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:17.960 11:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6aa08425-f68f-4377-bd9f-3f074100bfad 00:13:18.219 11:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31d1dd0d-6474-432f-8c2e-d6d5f2ed6dfd 00:13:18.478 11:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:18.739 00:13:18.739 real 0m19.192s 00:13:18.739 user 0m48.057s 00:13:18.739 sys 0m5.097s 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:18.739 ************************************ 00:13:18.739 END TEST lvs_grow_dirty 00:13:18.739 ************************************ 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:18.739 nvmf_trace.0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.739 rmmod nvme_tcp 00:13:18.739 rmmod nvme_fabrics 00:13:18.739 rmmod nvme_keyring 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2992992 ']' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2992992 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2992992 ']' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2992992 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2992992 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2992992' 00:13:18.739 killing process with pid 2992992 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2992992 00:13:18.739 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2992992 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.999 
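The teardown traced above archives the tracepoint shared-memory file for offline analysis and unloads the NVMe kernel modules before stopping the target. Condensed, with the output directory shortened from the Jenkins path and the pid taken from the log:

    # Archive the trace shm file, then unwind the fabric and the target.
    tar -C /dev/shm/ -cvzf output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    modprobe -v -r nvme-keyring
    kill -0 2992992 && kill 2992992          # stop the nvmf target started for lvs_grow_dirty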
11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.999 11:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.536 11:39:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.536 00:13:21.536 real 0m42.041s 00:13:21.536 user 1m10.666s 00:13:21.536 sys 0m8.985s 00:13:21.536 11:39:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.536 11:39:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:21.536 ************************************ 00:13:21.536 END TEST nvmf_lvs_grow 00:13:21.536 ************************************ 00:13:21.536 11:39:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:21.536 11:39:29 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:21.536 11:39:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:21.536 11:39:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.536 11:39:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.537 ************************************ 00:13:21.537 START TEST nvmf_bdev_io_wait 00:13:21.537 ************************************ 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:21.537 * Looking for test storage... 
00:13:21.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:21.537 11:39:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.449 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:23.450 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:23.450 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:23.450 Found net devices under 0000:84:00.0: cvl_0_0 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:23.450 Found net devices under 0000:84:00.1: cvl_0_1 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:23.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:13:23.450 00:13:23.450 --- 10.0.0.2 ping statistics --- 00:13:23.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.450 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:13:23.450 00:13:23.450 --- 10.0.0.1 ping statistics --- 00:13:23.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.450 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2995527 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2995527 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2995527 ']' 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.450 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.450 [2024-07-15 11:39:31.420002] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:23.450 [2024-07-15 11:39:31.420127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.710 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.710 [2024-07-15 11:39:31.488276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.710 [2024-07-15 11:39:31.601800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.710 [2024-07-15 11:39:31.601866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.710 [2024-07-15 11:39:31.601895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.710 [2024-07-15 11:39:31.601906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.710 [2024-07-15 11:39:31.601916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.710 [2024-07-15 11:39:31.602308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.710 [2024-07-15 11:39:31.602370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.710 [2024-07-15 11:39:31.602392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.710 [2024-07-15 11:39:31.602395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.710 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.969 [2024-07-15 11:39:31.731387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
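Taken together, the nvmf_tcp_init and nvmfappstart steps traced above amount to the outline below. This is a condensed sketch of this particular run, not a replacement for nvmf/common.sh: the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are what this host reported, and the relative ./build paths assume the spdk checkout used by the job.
  # Move one E810 port into its own network namespace to act as the target side;
  # the second port stays in the root namespace as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in and sanity-check both directions before starting the target.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # Launch the target inside the namespace; --wait-for-rpc defers framework init
  # until the test issues framework_start_init over the RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &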
00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.969 Malloc0 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.969 [2024-07-15 11:39:31.794170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2995672 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2995674 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:23.969 { 00:13:23.969 "params": { 00:13:23.969 "name": "Nvme$subsystem", 00:13:23.969 "trtype": "$TEST_TRANSPORT", 00:13:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:23.969 "adrfam": "ipv4", 00:13:23.969 "trsvcid": "$NVMF_PORT", 00:13:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:23.969 "hdgst": ${hdgst:-false}, 00:13:23.969 "ddgst": ${ddgst:-false} 00:13:23.969 }, 00:13:23.969 "method": "bdev_nvme_attach_controller" 00:13:23.969 } 00:13:23.969 EOF 00:13:23.969 )") 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2995676 00:13:23.969 11:39:31 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:23.969 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:23.969 { 00:13:23.969 "params": { 00:13:23.969 "name": "Nvme$subsystem", 00:13:23.969 "trtype": "$TEST_TRANSPORT", 00:13:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:23.969 "adrfam": "ipv4", 00:13:23.969 "trsvcid": "$NVMF_PORT", 00:13:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:23.970 "hdgst": ${hdgst:-false}, 00:13:23.970 "ddgst": ${ddgst:-false} 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 } 00:13:23.970 EOF 00:13:23.970 )") 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2995679 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:23.970 { 00:13:23.970 "params": { 00:13:23.970 "name": "Nvme$subsystem", 00:13:23.970 "trtype": "$TEST_TRANSPORT", 00:13:23.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:23.970 "adrfam": "ipv4", 00:13:23.970 "trsvcid": "$NVMF_PORT", 00:13:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:23.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:23.970 "hdgst": ${hdgst:-false}, 00:13:23.970 "ddgst": ${ddgst:-false} 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 } 00:13:23.970 EOF 00:13:23.970 )") 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:23.970 { 00:13:23.970 "params": { 00:13:23.970 "name": "Nvme$subsystem", 00:13:23.970 "trtype": "$TEST_TRANSPORT", 00:13:23.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:23.970 "adrfam": "ipv4", 00:13:23.970 "trsvcid": "$NVMF_PORT", 00:13:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:23.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:23.970 "hdgst": ${hdgst:-false}, 00:13:23.970 "ddgst": ${ddgst:-false} 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 } 00:13:23.970 EOF 00:13:23.970 )") 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2995672 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:23.970 "params": { 00:13:23.970 "name": "Nvme1", 00:13:23.970 "trtype": "tcp", 00:13:23.970 "traddr": "10.0.0.2", 00:13:23.970 "adrfam": "ipv4", 00:13:23.970 "trsvcid": "4420", 00:13:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.970 "hdgst": false, 00:13:23.970 "ddgst": false 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 }' 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
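With the target up, bdev_io_wait.sh configures it over RPC and then runs four bdevperf initiators against the nqn.2016-06.io.spdk:cnode1 listener, one workload apiece. In outline (commands as traced above; rpc_cmd is the harness RPC helper, gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed by the trace, and -p 5 -c 1 presumably shrinks the bdev I/O buffer pool so that I/O has to wait for buffers, which is the behaviour this test exercises):
  # Target side, via the RPC socket of the nvmf_tgt started with --wait-for-rpc:
  rpc_cmd bdev_set_options -p 5 -c 1
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: one bdevperf per core mask and workload, queue depth 128, 4096-byte I/O, 1 s runs.
  bdevperf=./build/examples/bdevperf
  "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
  "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  wait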
00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:23.970 "params": { 00:13:23.970 "name": "Nvme1", 00:13:23.970 "trtype": "tcp", 00:13:23.970 "traddr": "10.0.0.2", 00:13:23.970 "adrfam": "ipv4", 00:13:23.970 "trsvcid": "4420", 00:13:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.970 "hdgst": false, 00:13:23.970 "ddgst": false 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 }' 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:23.970 "params": { 00:13:23.970 "name": "Nvme1", 00:13:23.970 "trtype": "tcp", 00:13:23.970 "traddr": "10.0.0.2", 00:13:23.970 "adrfam": "ipv4", 00:13:23.970 "trsvcid": "4420", 00:13:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.970 "hdgst": false, 00:13:23.970 "ddgst": false 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 }' 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:23.970 11:39:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:23.970 "params": { 00:13:23.970 "name": "Nvme1", 00:13:23.970 "trtype": "tcp", 00:13:23.970 "traddr": "10.0.0.2", 00:13:23.970 "adrfam": "ipv4", 00:13:23.970 "trsvcid": "4420", 00:13:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.970 "hdgst": false, 00:13:23.970 "ddgst": false 00:13:23.970 }, 00:13:23.970 "method": "bdev_nvme_attach_controller" 00:13:23.970 }' 00:13:23.970 [2024-07-15 11:39:31.840556] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:23.970 [2024-07-15 11:39:31.840557] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:23.970 [2024-07-15 11:39:31.840556] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:23.970 [2024-07-15 11:39:31.840643] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 11:39:31.840644] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 11:39:31.840644] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:23.970 --proc-type=auto ] 00:13:23.970 --proc-type=auto ] 00:13:23.970 [2024-07-15 11:39:31.842956] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:23.970 [2024-07-15 11:39:31.843016] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:23.970 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.228 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.228 [2024-07-15 11:39:32.013855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.228 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.228 [2024-07-15 11:39:32.111573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:24.228 [2024-07-15 11:39:32.112914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.228 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.228 [2024-07-15 11:39:32.211414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:24.228 [2024-07-15 11:39:32.213371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.488 [2024-07-15 11:39:32.314076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:24.488 [2024-07-15 11:39:32.316990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.488 [2024-07-15 11:39:32.419620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:24.748 Running I/O for 1 seconds... 00:13:24.748 Running I/O for 1 seconds... 00:13:24.748 Running I/O for 1 seconds... 00:13:25.006 Running I/O for 1 seconds... 00:13:25.941 00:13:25.941 Latency(us) 00:13:25.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.941 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:25.941 Nvme1n1 : 1.01 11968.12 46.75 0.00 0.00 10658.18 5776.88 17670.45 00:13:25.941 =================================================================================================================== 00:13:25.941 Total : 11968.12 46.75 0.00 0.00 10658.18 5776.88 17670.45 00:13:25.941 00:13:25.941 Latency(us) 00:13:25.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.941 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:25.941 Nvme1n1 : 1.01 8554.19 33.41 0.00 0.00 14885.10 9563.40 24272.59 00:13:25.941 =================================================================================================================== 00:13:25.941 Total : 8554.19 33.41 0.00 0.00 14885.10 9563.40 24272.59 00:13:25.941 00:13:25.941 Latency(us) 00:13:25.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.941 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:25.941 Nvme1n1 : 1.00 199903.23 780.87 0.00 0.00 637.75 267.00 873.81 00:13:25.941 =================================================================================================================== 00:13:25.941 Total : 199903.23 780.87 0.00 0.00 637.75 267.00 873.81 00:13:25.941 00:13:25.941 Latency(us) 00:13:25.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.941 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:25.941 Nvme1n1 : 1.01 8591.23 33.56 0.00 0.00 14833.40 6990.51 26796.94 00:13:25.941 =================================================================================================================== 00:13:25.941 Total : 8591.23 33.56 0.00 0.00 14833.40 6990.51 26796.94 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2995674 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2995676 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2995679 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.200 rmmod nvme_tcp 00:13:26.200 rmmod nvme_fabrics 00:13:26.200 rmmod nvme_keyring 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2995527 ']' 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2995527 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2995527 ']' 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2995527 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2995527 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2995527' 00:13:26.200 killing process with pid 2995527 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2995527 00:13:26.200 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2995527 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.458 11:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.997 11:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.997 00:13:28.997 real 0m7.424s 00:13:28.997 user 0m17.725s 00:13:28.997 sys 0m3.676s 00:13:28.997 11:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.997 11:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:28.997 ************************************ 00:13:28.997 END TEST nvmf_bdev_io_wait 00:13:28.997 ************************************ 00:13:28.997 11:39:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.997 11:39:36 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:28.997 11:39:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.997 11:39:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.997 11:39:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.997 ************************************ 00:13:28.997 START TEST nvmf_queue_depth 00:13:28.997 ************************************ 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:28.997 * Looking for test storage... 
00:13:28.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.997 11:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.998 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.998 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.998 11:39:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.998 11:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.903 
11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:30.903 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:30.903 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:30.903 Found net devices under 0000:84:00.0: cvl_0_0 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:30.903 Found net devices under 0000:84:00.1: cvl_0_1 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.903 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:13:30.904 00:13:30.904 --- 10.0.0.2 ping statistics --- 00:13:30.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.904 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:30.904 00:13:30.904 --- 10.0.0.1 ping statistics --- 00:13:30.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.904 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.904 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2997917 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2997917 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2997917 ']' 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.163 11:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.163 [2024-07-15 11:39:38.943091] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:31.163 [2024-07-15 11:39:38.943176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.163 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.163 [2024-07-15 11:39:39.006226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.163 [2024-07-15 11:39:39.111947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.163 [2024-07-15 11:39:39.111997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.163 [2024-07-15 11:39:39.112020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.163 [2024-07-15 11:39:39.112031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.163 [2024-07-15 11:39:39.112041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.163 [2024-07-15 11:39:39.112081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.136 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 [2024-07-15 11:39:39.903081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 Malloc0 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.137 
11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 [2024-07-15 11:39:39.964883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2998064 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2998064 /var/tmp/bdevperf.sock 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2998064 ']' 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.137 11:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.137 [2024-07-15 11:39:40.014097] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
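The queue_depth target is provisioned through the rpc_cmd helper, which in this harness drives scripts/rpc.py against the default /var/tmp/spdk.sock socket. Expressed directly, the sequence traced above is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the options used in this run
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # expose the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420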
00:13:32.137 [2024-07-15 11:39:40.014196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998064 ] 00:13:32.137 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.137 [2024-07-15 11:39:40.075822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.422 [2024-07-15 11:39:40.183667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.422 11:39:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.422 11:39:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:32.422 11:39:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:32.422 11:39:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.422 11:39:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.682 NVMe0n1 00:13:32.682 11:39:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.682 11:39:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.682 Running I/O for 10 seconds... 00:13:44.898 00:13:44.898 Latency(us) 00:13:44.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.898 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:44.898 Verification LBA range: start 0x0 length 0x4000 00:13:44.898 NVMe0n1 : 10.09 9834.33 38.42 0.00 0.00 103751.74 21845.33 66021.45 00:13:44.898 =================================================================================================================== 00:13:44.898 Total : 9834.33 38.42 0.00 0.00 103751.74 21845.33 66021.45 00:13:44.899 0 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2998064 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2998064 ']' 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2998064 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2998064 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2998064' 00:13:44.899 killing process with pid 2998064 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2998064 00:13:44.899 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.899 00:13:44.899 Latency(us) 00:13:44.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.899 
=================================================================================================================== 00:13:44.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.899 11:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2998064 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.899 rmmod nvme_tcp 00:13:44.899 rmmod nvme_fabrics 00:13:44.899 rmmod nvme_keyring 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2997917 ']' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2997917 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2997917 ']' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2997917 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2997917 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2997917' 00:13:44.899 killing process with pid 2997917 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2997917 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2997917 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.899 11:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.467 11:39:53 nvmf_tcp.nvmf_queue_depth -- 
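The bdevperf run summarized in the table above is the initiator side of the test: bdevperf is started in wait-for-RPC mode on its own socket, attached to the exported subsystem, and then driven for ten seconds at queue depth 1024 before the teardown traced here. A condensed sketch of that run, with paths as they appear in this workspace, is:

  # start bdevperf in wait-for-RPC mode (-z) on a dedicated socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the NVMe-oF controller exported by the target
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the timed I/O run
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests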
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.467 00:13:45.467 real 0m16.908s 00:13:45.467 user 0m23.220s 00:13:45.467 sys 0m3.554s 00:13:45.467 11:39:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.467 11:39:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.467 ************************************ 00:13:45.467 END TEST nvmf_queue_depth 00:13:45.467 ************************************ 00:13:45.725 11:39:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:45.725 11:39:53 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:45.725 11:39:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:45.725 11:39:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.725 11:39:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.725 ************************************ 00:13:45.725 START TEST nvmf_target_multipath 00:13:45.725 ************************************ 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:45.725 * Looking for test storage... 00:13:45.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.725 11:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:47.629 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.629 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:47.630 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:47.630 Found net devices under 0000:84:00.0: cvl_0_0 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:47.630 Found net devices under 0000:84:00.1: cvl_0_1 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.630 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:13:47.889 00:13:47.889 --- 10.0.0.2 ping statistics --- 00:13:47.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.889 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:13:47.889 00:13:47.889 --- 10.0.0.1 ping statistics --- 00:13:47.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.889 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:47.889 only one NIC for nvmf test 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.889 rmmod nvme_tcp 00:13:47.889 rmmod nvme_fabrics 00:13:47.889 rmmod nvme_keyring 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.889 11:39:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:50.426 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.427 00:13:50.427 real 0m4.332s 00:13:50.427 user 0m0.799s 00:13:50.427 sys 0m1.531s 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.427 11:39:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:50.427 ************************************ 00:13:50.427 END TEST nvmf_target_multipath 00:13:50.427 ************************************ 00:13:50.427 11:39:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.427 11:39:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:50.427 11:39:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.427 11:39:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.427 11:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.427 ************************************ 00:13:50.427 START TEST nvmf_zcopy 00:13:50.427 ************************************ 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:50.427 * Looking for test storage... 
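The nvmf_target_multipath test above finished after only a few seconds because this rig wires a single NIC pair back-to-back, so nvmf_tcp_init leaves NVMF_SECOND_TARGET_IP empty and multipath.sh takes its early-exit path. The guard it executes at lines 45-48 is essentially the following sketch (variable name inferred from the common.sh assignment above):

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi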
00:13:50.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.427 11:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:52.335 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.335 
11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:52.335 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:52.335 Found net devices under 0000:84:00.0: cvl_0_0 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:52.335 Found net devices under 0000:84:00.1: cvl_0_1 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:13:52.335 00:13:52.335 --- 10.0.0.2 ping statistics --- 00:13:52.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.335 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:13:52.335 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:52.335 00:13:52.336 --- 10.0.0.1 ping statistics --- 00:13:52.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.336 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3003159 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3003159 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3003159 ']' 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.336 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.336 [2024-07-15 11:40:00.241404] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:52.336 [2024-07-15 11:40:00.241479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.336 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.336 [2024-07-15 11:40:00.308395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.595 [2024-07-15 11:40:00.414639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.595 [2024-07-15 11:40:00.414699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
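The nvmf_tcp_init trace above takes the two e810 functions found earlier (0000:84:00.0 and 0000:84:00.1), resolves them to their kernel net devices (cvl_0_0 and cvl_0_1) through the /sys/bus/pci/devices/$pci/net/ glob, and builds a loopback test topology: cvl_0_0 is moved into a dedicated network namespace for the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace for the initiator side (10.0.0.1/24), and an iptables rule admits TCP port 4420; the two pings then verify connectivity in both directions. A minimal standalone sketch of the same setup, using the interface names and addresses from this run:

  ls /sys/bus/pci/devices/0000:84:00.0/net/                      # -> cvl_0_0
  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator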
00:13:52.595 [2024-07-15 11:40:00.414721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.595 [2024-07-15 11:40:00.414731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.595 [2024-07-15 11:40:00.414763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.595 [2024-07-15 11:40:00.414791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.595 [2024-07-15 11:40:00.561482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.595 [2024-07-15 11:40:00.577680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:52.595 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.856 malloc0 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.856 
11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.856 { 00:13:52.856 "params": { 00:13:52.856 "name": "Nvme$subsystem", 00:13:52.856 "trtype": "$TEST_TRANSPORT", 00:13:52.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.856 "adrfam": "ipv4", 00:13:52.856 "trsvcid": "$NVMF_PORT", 00:13:52.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.856 "hdgst": ${hdgst:-false}, 00:13:52.856 "ddgst": ${ddgst:-false} 00:13:52.856 }, 00:13:52.856 "method": "bdev_nvme_attach_controller" 00:13:52.856 } 00:13:52.856 EOF 00:13:52.856 )") 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:52.856 11:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.856 "params": { 00:13:52.856 "name": "Nvme1", 00:13:52.856 "trtype": "tcp", 00:13:52.856 "traddr": "10.0.0.2", 00:13:52.856 "adrfam": "ipv4", 00:13:52.856 "trsvcid": "4420", 00:13:52.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.856 "hdgst": false, 00:13:52.856 "ddgst": false 00:13:52.856 }, 00:13:52.856 "method": "bdev_nvme_attach_controller" 00:13:52.856 }' 00:13:52.856 [2024-07-15 11:40:00.660377] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:52.856 [2024-07-15 11:40:00.660454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003294 ] 00:13:52.856 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.856 [2024-07-15 11:40:00.725935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.856 [2024-07-15 11:40:00.834255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.425 Running I/O for 10 seconds... 
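nvmfappstart launches nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 3003159), and zcopy.sh then configures it over /var/tmp/spdk.sock: a TCP transport with zero-copy enabled (-o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 with a malloc-backed namespace, and listeners on 10.0.0.2:4420. The rpc_cmd helper in the trace is assumed here to be a thin wrapper around scripts/rpc.py; a minimal sketch of the same sequence run by hand against that RPC socket:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0          # malloc backing bdev: size 32, 4096-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1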
00:14:03.413 00:14:03.413 Latency(us) 00:14:03.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.413 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:03.413 Verification LBA range: start 0x0 length 0x1000 00:14:03.413 Nvme1n1 : 10.01 6506.56 50.83 0.00 0.00 19621.98 2621.44 28156.21 00:14:03.413 =================================================================================================================== 00:14:03.413 Total : 6506.56 50.83 0.00 0.00 19621.98 2621.44 28156.21 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3004498 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:03.672 { 00:14:03.672 "params": { 00:14:03.672 "name": "Nvme$subsystem", 00:14:03.672 "trtype": "$TEST_TRANSPORT", 00:14:03.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.672 "adrfam": "ipv4", 00:14:03.672 "trsvcid": "$NVMF_PORT", 00:14:03.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.672 "hdgst": ${hdgst:-false}, 00:14:03.672 "ddgst": ${ddgst:-false} 00:14:03.672 }, 00:14:03.672 "method": "bdev_nvme_attach_controller" 00:14:03.672 } 00:14:03.672 EOF 00:14:03.672 )") 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:03.672 [2024-07-15 11:40:11.454383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.454428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
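The verify run above (bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192) drives the malloc-backed namespace through the zero-copy TCP transport for 10 seconds, and the result table is self-consistent: 6506.56 IOPS at an 8192-byte IO size is 6506.56 * 8192 / 2^20, which is about 50.83 MiB/s, matching the MiB/s column. The config fed on /dev/fd/62 is the bdev_nvme_attach_controller object printed by gen_nvmf_target_json just before the run; saved to a file, an equivalent standalone invocation would look roughly like the sketch below (the bdevperf.json name and the subsystems wrapper are assumptions, the params are the ones from the trace):

  cat > bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  ./build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192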
00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:03.672 11:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:03.672 "params": { 00:14:03.672 "name": "Nvme1", 00:14:03.672 "trtype": "tcp", 00:14:03.672 "traddr": "10.0.0.2", 00:14:03.672 "adrfam": "ipv4", 00:14:03.672 "trsvcid": "4420", 00:14:03.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.672 "hdgst": false, 00:14:03.672 "ddgst": false 00:14:03.672 }, 00:14:03.672 "method": "bdev_nvme_attach_controller" 00:14:03.672 }' 00:14:03.672 [2024-07-15 11:40:11.462331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.462353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.470349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.470369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.478369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.478389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.486390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.486409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.491248] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:03.672 [2024-07-15 11:40:11.491336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004498 ] 00:14:03.672 [2024-07-15 11:40:11.494413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.494434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.502436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.502455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.510457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.510476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.518479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.518498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.672 [2024-07-15 11:40:11.526502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.526522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.534525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.534545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.542546] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.542566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.550567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.550587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.551240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.672 [2024-07-15 11:40:11.558636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.558679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.566650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.566684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.672 [2024-07-15 11:40:11.574636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.672 [2024-07-15 11:40:11.574657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.582657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.582677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.590680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.590701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.598700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.598735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.606736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.606765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.614788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.614819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.622836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.622875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.630823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.630846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.638833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.638855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.646850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.646872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.673 [2024-07-15 11:40:11.654872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:03.673 [2024-07-15 11:40:11.654895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.662895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.662918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.670277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.932 [2024-07-15 11:40:11.670915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.670937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.678936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.678957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.686996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.687048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.695038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.695079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.703059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.703111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.711103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.711141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.719122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.719164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.727142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.727180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.735128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.735150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.743168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.743211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.751211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.751253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.759205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.759229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.767198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:03.932 [2024-07-15 11:40:11.767219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.775221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.775241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.783243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.783263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.791274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.791300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.799293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.799315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.807313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.807335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.815333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.815357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.823354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.823376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.831378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.831402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.839393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.839414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.847420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.847446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.855433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.855464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 Running I/O for 5 seconds... 
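The second bdevperf instance (perfpid 3004498) reuses an identically generated attach-controller config but runs a 50/50 random read/write mix for 5 seconds (-t 5 -q 128 -w randrw -M 50 -o 8192). While it runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which already belongs to malloc0, so every attempt is rejected with the 'Requested NSID 1 already in use' / 'Unable to add namespace' pair that repeats below; the error is reported from nvmf_rpc_ns_paused, so each rejected attempt still cycles the subsystem through pause and resume while zero-copy IO is in flight. A rough sketch of the kind of loop that produces this stream (the loop condition and error suppression are assumptions, the RPC line is the one from the trace):

  while kill -0 "$perfpid" 2>/dev/null; do
      # NSID 1 is already taken by malloc0, so this add is expected to fail every time
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done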
00:14:03.932 [2024-07-15 11:40:11.863458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.863479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.876987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.877028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.887478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.887504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.898296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.898320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.932 [2024-07-15 11:40:11.910351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.932 [2024-07-15 11:40:11.910376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.920574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.920601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.931387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.931411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.944078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.944103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.954249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.954274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.964431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.964457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.975001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.975048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.987101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.987126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:11.996542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:11.996567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.006706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.006754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.016986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 
[2024-07-15 11:40:12.017012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.027206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.027231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.037428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.037453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.049770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.049796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.059700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.059756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.070168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.070192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.080361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.080386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.090325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.090349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.100588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.100613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.110666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.110691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.120910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.120938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.131522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.131547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.141583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.141609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.151679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.151704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.161789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.161816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.192 [2024-07-15 11:40:12.172444] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.192 [2024-07-15 11:40:12.172469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.183501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.183528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.193825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.193852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.204252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.204277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.216282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.216307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.225248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.225274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.236395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.236419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.246664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.246689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.256650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.256683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.267190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.267216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.277535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.277559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.287799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.287825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.300344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.300368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.310126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.310151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.320569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.320593] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.330487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.330511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.341003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.341047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.353348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.353373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.362298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.362323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.372557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.372581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.383030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.383056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.396232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.396257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.407106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.407131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.415864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.415889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.452 [2024-07-15 11:40:12.426793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.452 [2024-07-15 11:40:12.426820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.439216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.439243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.449567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.449592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.459745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.459771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.470150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.470174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.482549] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.482574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.491903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.491929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.502757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.502783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.514940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.514967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.524626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.524650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.535153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.535178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.545642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.545667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.557544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.557569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.566593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.566618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.577775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.577801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.590199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.590223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.599774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.599799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.609735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.609770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.620037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.620063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.630188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.630213] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.640629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.640653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.651054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.651080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.662599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.662624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.671998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.672039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.682570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.682595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.712 [2024-07-15 11:40:12.695321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.712 [2024-07-15 11:40:12.695346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.705872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.705899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.715752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.715793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.726065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.726106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.736606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.736631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.746938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.746964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.757223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.757249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.767568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.767593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.777678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.777703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.787822] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.787849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.797809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.797835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.808375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.808401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.820699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.820746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.830707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.830756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.840891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.840917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.851232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.851258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.863669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.863696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.873513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.873553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.883558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.883583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.893951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.893979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.906665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.970 [2024-07-15 11:40:12.906689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.970 [2024-07-15 11:40:12.917963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.971 [2024-07-15 11:40:12.917990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.971 [2024-07-15 11:40:12.926901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.971 [2024-07-15 11:40:12.926927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.971 [2024-07-15 11:40:12.937607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.971 [2024-07-15 11:40:12.937632] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.971 [2024-07-15 11:40:12.947690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.971 [2024-07-15 11:40:12.947714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:12.957932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:12.957975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:12.971073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:12.971113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:12.980945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:12.980970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:12.991125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:12.991149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.001232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.001257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.010982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.011008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.020796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.020822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.031321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.031349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.043216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.043240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.054405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.054429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.062999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.063039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.075616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.075641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.085628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.085655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.096310] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.096334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.108255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.108280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.117364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.117388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.127866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.127892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.137887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.137913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.148335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.148359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.160240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.160264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.169799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.169825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.179715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.179763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.189767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.189793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.200002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.200045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.228 [2024-07-15 11:40:13.210222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.228 [2024-07-15 11:40:13.210247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.220851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.220878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.233242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.233266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.242842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.242868] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.252849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.252882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.262925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.262950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.273200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.273225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.284039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.284064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.296102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.296125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.306179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.306203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.316185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.316210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.325975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.326001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.336334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.336359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.348855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.348882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.360334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.360359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.369003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.369044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.381754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.381812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.391806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.391832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.402153] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.402178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.412134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.412159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.422007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.422046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.432059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.432100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.442161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.442186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.452112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.452143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.462344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.462369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.486 [2024-07-15 11:40:13.472624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.486 [2024-07-15 11:40:13.472650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.484513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.484538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.493599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.493624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.505076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.505116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.516219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.516245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.524950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.524976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.535314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.535338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.545140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.545166] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.555094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.555120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.564912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.564938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.574760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.745 [2024-07-15 11:40:13.574786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.745 [2024-07-15 11:40:13.584655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.584680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.594597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.594622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.604935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.604961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.615131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.615156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.626915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.626941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.636193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.636218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.646122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.646156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.656153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.656178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.665936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.665963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.676329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.676354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.688208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.688233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.697605] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.697630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.707878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.707904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.719507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.719531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.746 [2024-07-15 11:40:13.729103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.746 [2024-07-15 11:40:13.729128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.740475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.740501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.751268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.751293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.761272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.761297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.771125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.771151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.781317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.781342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.791330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.791355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.801082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.801107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.811267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.811292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.821734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.821769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.834072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.834097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.843448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.843480] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.853466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.853491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.863777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.863803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.877272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.877297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.886767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.886794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.897313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.897338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.909443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.909468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.918998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.919040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.928727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.928763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.939048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.939091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.949397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.949423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.959960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.959989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.973635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.973660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.004 [2024-07-15 11:40:13.985496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.004 [2024-07-15 11:40:13.985521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.263 [2024-07-15 11:40:13.995465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.263 [2024-07-15 11:40:13.995491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.263 [2024-07-15 11:40:14.006732] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.263 [2024-07-15 11:40:14.006769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.263 [2024-07-15 11:40:14.017306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.263 [2024-07-15 11:40:14.017331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.263 [2024-07-15 11:40:14.027930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.027957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.038310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.038334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.048645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.048669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.060471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.060496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.070055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.070080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.080163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.080188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.090490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.090515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.100882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.100908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.110995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.111036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.121411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.121436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.131696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.131735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.144144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.144169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.153682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.153707] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.163939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.163965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.174819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.174845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.184637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.184661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.194405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.194429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.204602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.204627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.214476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.214501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.224557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.224582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.234845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.234871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.264 [2024-07-15 11:40:14.244928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.264 [2024-07-15 11:40:14.244954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.256129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.256155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.266620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.266644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.276928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.276955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.287232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.287257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.296964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.296990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.308534] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.308558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.317706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.317754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.328408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.328432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.339092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.339117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.349560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.349584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.360244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.360272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.370608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.370634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.381046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.381072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.392921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.392947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.403165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.403190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.413666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.413692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.426574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.426599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.436464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.436489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.447912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.447940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.458437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.458462] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.469082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.469107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.479644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.479670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.489892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.489919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.524 [2024-07-15 11:40:14.502700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.524 [2024-07-15 11:40:14.502749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.512318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.512345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.523627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.523653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.533956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.533983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.544542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.544567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.555321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.555346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.567552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.567577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.577802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.577829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.588522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.588548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.599032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.599059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.609318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.609344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.621746] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.621772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.631330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.631356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.641832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.641859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.652575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.652601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.663464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.663490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.675445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.675472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.685448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.685473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.695520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.695546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.706258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.706284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.718655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.718681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.728580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.783 [2024-07-15 11:40:14.728605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.783 [2024-07-15 11:40:14.739331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.784 [2024-07-15 11:40:14.739357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.784 [2024-07-15 11:40:14.750147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.784 [2024-07-15 11:40:14.750173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.784 [2024-07-15 11:40:14.760392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.784 [2024-07-15 11:40:14.760419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.771165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.771193] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.782176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.782202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.792766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.792793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.803273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.803299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.813785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.813812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.824464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.824490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.836748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.836774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.846454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.846492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.857196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.857222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.869109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.869134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.879186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.879212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.890556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.890581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.901159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.901186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.911296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.911321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.921468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.921493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.931373] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.931398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.941478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.941502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.953438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.953463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.962070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.962109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.972495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.972520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.982277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.982303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:14.992524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:14.992549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:15.002607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:15.002634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:15.012663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:15.012688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.042 [2024-07-15 11:40:15.022772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.042 [2024-07-15 11:40:15.022800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.033998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.034040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.046425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.046458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.055865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.055893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.068224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.068250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.078650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.078678] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.088930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.088956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.099224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.099250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.109208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.109232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.119095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.119120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.129235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.129260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.139306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.139332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.151456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.151480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.162705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.162753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.171834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.171861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.182462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.182487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.192694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.192733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.203019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.203058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.213004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.213045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.223454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.223489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.235784] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.235810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.245410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.245441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.255226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.255251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.264975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.265001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.275843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.275870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.302 [2024-07-15 11:40:15.286678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.302 [2024-07-15 11:40:15.286704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.297248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.297274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.309458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.309483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.318871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.318898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.330070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.330110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.342073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.342114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.351584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.351609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.362309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.362334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.372774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.372800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.383381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.383405] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.393993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.394034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.403884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.403910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.414420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.414445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.426498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.426523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.436106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.436131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.446383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.446414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.456974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.457001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.469600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.469625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.479243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.479268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.488998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.489038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.499167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.499192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.509189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.509213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.519320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.519345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.529390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.529415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.562 [2024-07-15 11:40:15.540434] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.562 [2024-07-15 11:40:15.540459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.550934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.550962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.562689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.562728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.572108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.572133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.582465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.582489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.594273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.594298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.603978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.604004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.614191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.614217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.624240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.624265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.634092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.634117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.644272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.644297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.654527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.654552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.664614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.664638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.674708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.674755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.684564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.684590] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.694939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.694965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.707202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.707227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.716957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.716983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.727318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.727342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.737413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.737438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.747801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.747827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.759949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.759975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.771291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.771317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.780217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.780242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.791117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.791141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:07.822 [2024-07-15 11:40:15.803587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:07.822 [2024-07-15 11:40:15.803613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.814216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.814242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.824496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.824520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.835420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.835446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.847707] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.847759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.859316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.859341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.868556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.868582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.879711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.879762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.890757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.890783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.901147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.901172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.081 [2024-07-15 11:40:15.911752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.081 [2024-07-15 11:40:15.911779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.922337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.922362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.934402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.934435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.946206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.946237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.954886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.954912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.967041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.967066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.976765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.976792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.986832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.986857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:15.996855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:15.996881] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:16.007243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:16.007268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:16.017430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:16.017454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:16.027549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:16.027582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:16.037525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:16.037550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:16.047617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:16.047642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.082 [2024-07-15 11:40:16.058294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.082 [2024-07-15 11:40:16.058319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.071161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.071186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.080309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.080334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.095584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.095610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.105384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.105409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.115199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.115224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.125020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.125062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.135182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.135207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.145287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.145312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.155265] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.155290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.165586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.165611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.175834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.175861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.186319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.186344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.196149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.196174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.207197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.207225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.217534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.217559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.227699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.227759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.340 [2024-07-15 11:40:16.238074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.340 [2024-07-15 11:40:16.238114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.248159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.248184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.258380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.258405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.268522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.268547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.278401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.278426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.288770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.288798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.299106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.299131] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.309015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.309060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.341 [2024-07-15 11:40:16.319302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.341 [2024-07-15 11:40:16.319327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.332296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.332321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.342136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.342175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.352588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.352612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.363017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.363055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.375461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.375485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.386706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.386753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.395574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.395598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.406437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.406462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.419031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.419067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.430357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.430382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.439452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.439484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.450303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.450328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.462863] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.462889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.474191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.474216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.483411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.483436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.494517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.494542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.506462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.506487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.516922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.516949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.527400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.527425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.539603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.539628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.550095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.550121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.560648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.560673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.572864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.572890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.601 [2024-07-15 11:40:16.582038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.601 [2024-07-15 11:40:16.582080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.594214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.594240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.603610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.603635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.614099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.614124] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.624226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.624252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.634092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.634118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.644357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.644389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.654404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.654428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.664889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.664914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.675261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.675285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.687537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.687562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.697067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.697106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.861 [2024-07-15 11:40:16.707487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.861 [2024-07-15 11:40:16.707512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.717527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.717552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.727355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.727381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.737498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.737523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.748190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.748214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.758360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.758385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.768626] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.768650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.780813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.780839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.790417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.790442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.800866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.800892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.810901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.810927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.821091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.821116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.831076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.831101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.862 [2024-07-15 11:40:16.841413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.862 [2024-07-15 11:40:16.841445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.851991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.852019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.862293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.862318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.872626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.872650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.880224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.880249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 00:14:09.147 Latency(us) 00:14:09.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.147 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:09.147 Nvme1n1 : 5.01 12356.76 96.54 0.00 0.00 10344.89 4369.07 18544.26 00:14:09.147 =================================================================================================================== 00:14:09.147 Total : 12356.76 96.54 0.00 0.00 10344.89 4369.07 18544.26 00:14:09.147 [2024-07-15 11:40:16.886661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.886685] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.894681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.894704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.902703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.902727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.910803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.910853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.918814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.918862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.926845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.926890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.934848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.934894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.942880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.942929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.950896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.950944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.958903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.958948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.966927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.966971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.974957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.975031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.982989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.983043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.990998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.991049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:16.999015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:16.999061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.007038] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.007086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.015066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.015122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.023038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.023060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.031056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.031077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.039073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.039095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.047106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.047127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.055168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.055216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.063199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.063253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.071201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.071243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.079166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.079187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.087185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.087205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.095207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.095227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.103238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.103260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.111327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.111378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.119337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.119387] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.147 [2024-07-15 11:40:17.127306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.147 [2024-07-15 11:40:17.127330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.406 [2024-07-15 11:40:17.135316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.406 [2024-07-15 11:40:17.135338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.406 [2024-07-15 11:40:17.143335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.406 [2024-07-15 11:40:17.143354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3004498) - No such process 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3004498 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.406 delay0 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.406 11:40:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:09.406 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.406 [2024-07-15 11:40:17.220246] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:17.546 Initializing NVMe Controllers 00:14:17.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:17.546 Initialization complete. Launching workers. 
00:14:17.546 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 17426 00:14:17.546 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17599, failed to submit 91 00:14:17.546 success 17480, unsuccess 119, failed 0 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.546 rmmod nvme_tcp 00:14:17.546 rmmod nvme_fabrics 00:14:17.546 rmmod nvme_keyring 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3003159 ']' 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3003159 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3003159 ']' 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3003159 00:14:17.546 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3003159 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3003159' 00:14:17.547 killing process with pid 3003159 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3003159 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3003159 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.547 11:40:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.923 11:40:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.924 00:14:18.924 real 0m28.812s 00:14:18.924 user 0m40.479s 00:14:18.924 sys 0m10.758s 00:14:18.924 11:40:26 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.924 11:40:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:18.924 ************************************ 00:14:18.924 END TEST nvmf_zcopy 00:14:18.924 ************************************ 00:14:18.924 11:40:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:18.924 11:40:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:18.924 11:40:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.924 11:40:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.924 11:40:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:18.924 ************************************ 00:14:18.924 START TEST nvmf_nmic 00:14:18.924 ************************************ 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:18.924 * Looking for test storage... 00:14:18.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.924 11:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:20.840 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:20.840 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:20.840 Found net devices under 0000:84:00.0: cvl_0_0 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:20.840 Found net devices under 0000:84:00.1: cvl_0_1 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.840 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:14:21.099 00:14:21.099 --- 10.0.0.2 ping statistics --- 00:14:21.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.099 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:14:21.099 00:14:21.099 --- 10.0.0.1 ping statistics --- 00:14:21.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.099 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3008014 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3008014 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3008014 ']' 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.099 11:40:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.099 [2024-07-15 11:40:28.991194] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:21.099 [2024-07-15 11:40:28.991291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.099 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.099 [2024-07-15 11:40:29.055250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.358 [2024-07-15 11:40:29.161100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.358 [2024-07-15 11:40:29.161156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.358 [2024-07-15 11:40:29.161176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.358 [2024-07-15 11:40:29.161187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.358 [2024-07-15 11:40:29.161201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.358 [2024-07-15 11:40:29.161283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.358 [2024-07-15 11:40:29.161392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.358 [2024-07-15 11:40:29.161489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.358 [2024-07-15 11:40:29.161495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.358 [2024-07-15 11:40:29.315672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.358 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 Malloc0 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 [2024-07-15 11:40:29.366975] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:21.616 test case1: single bdev can't be used in multiple subsystems 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 [2024-07-15 11:40:29.390853] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:21.616 [2024-07-15 11:40:29.390884] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:21.616 [2024-07-15 11:40:29.390899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.616 request: 00:14:21.616 { 00:14:21.616 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:21.616 "namespace": { 00:14:21.616 "bdev_name": "Malloc0", 00:14:21.616 "no_auto_visible": false 00:14:21.616 }, 00:14:21.616 "method": "nvmf_subsystem_add_ns", 00:14:21.616 "req_id": 1 00:14:21.616 } 00:14:21.616 Got JSON-RPC error response 00:14:21.616 response: 00:14:21.616 { 00:14:21.616 "code": -32602, 00:14:21.616 "message": "Invalid parameters" 00:14:21.616 } 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:21.616 Adding namespace failed - expected result. 
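Test case 1 above exercises the expected failure path: Malloc0 is already claimed (exclusive_write) by nqn.2016-06.io.spdk:cnode1, so adding it as a namespace to a second subsystem is rejected with the "Invalid parameters" JSON-RPC error shown. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket and the Malloc0/cnode1 objects created earlier in this trace (not part of nmic.sh itself):

#!/usr/bin/env bash
# Sketch only -- mirrors the rpc_cmd calls traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Expected to fail: a bdev can back only one subsystem namespace at a time.
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected: namespace add succeeded'
else
    echo ' Adding namespace failed - expected result.'
fi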
00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:21.616 test case2: host connect to nvmf target in multiple paths 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:21.616 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.617 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:21.617 [2024-07-15 11:40:29.398960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:21.617 11:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.617 11:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.185 11:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:22.753 11:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.753 11:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:22.753 11:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.753 11:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:22.753 11:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:25.280 11:40:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:25.280 [global] 00:14:25.280 thread=1 00:14:25.280 invalidate=1 00:14:25.280 rw=write 00:14:25.280 time_based=1 00:14:25.280 runtime=1 00:14:25.280 ioengine=libaio 00:14:25.280 direct=1 00:14:25.280 bs=4096 00:14:25.280 iodepth=1 00:14:25.280 norandommap=0 00:14:25.280 numjobs=1 00:14:25.280 00:14:25.280 verify_dump=1 00:14:25.280 verify_backlog=512 00:14:25.280 verify_state_save=0 00:14:25.280 do_verify=1 00:14:25.280 verify=crc32c-intel 00:14:25.280 [job0] 00:14:25.280 filename=/dev/nvme0n1 00:14:25.280 Could not set queue depth (nvme0n1) 00:14:25.280 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:25.280 fio-3.35 00:14:25.280 Starting 1 thread 00:14:26.213 00:14:26.213 job0: (groupid=0, jobs=1): err= 0: pid=3008532: Mon Jul 15 11:40:34 2024 00:14:26.213 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:14:26.213 slat (nsec): min=9860, max=41020, avg=21643.50, stdev=9770.60 
00:14:26.213 clat (usec): min=40809, max=41348, avg=40985.75, stdev=107.45 00:14:26.213 lat (usec): min=40843, max=41357, avg=41007.39, stdev=101.93 00:14:26.213 clat percentiles (usec): 00:14:26.213 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:26.213 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:26.213 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:26.213 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:26.213 | 99.99th=[41157] 00:14:26.213 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:14:26.213 slat (nsec): min=7056, max=40278, avg=11071.01, stdev=6280.85 00:14:26.213 clat (usec): min=129, max=399, avg=180.61, stdev=48.20 00:14:26.213 lat (usec): min=136, max=435, avg=191.68, stdev=52.04 00:14:26.213 clat percentiles (usec): 00:14:26.213 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:14:26.213 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 172], 00:14:26.213 | 70.00th=[ 184], 80.00th=[ 212], 90.00th=[ 245], 95.00th=[ 285], 00:14:26.213 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 400], 99.95th=[ 400], 00:14:26.213 | 99.99th=[ 400] 00:14:26.213 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:26.213 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:26.213 lat (usec) : 250=87.27%, 500=8.61% 00:14:26.213 lat (msec) : 50=4.12% 00:14:26.213 cpu : usr=1.00%, sys=0.20%, ctx=534, majf=0, minf=2 00:14:26.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.213 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:26.213 00:14:26.213 Run status group 0 (all jobs): 00:14:26.213 READ: bw=87.8KiB/s (89.9kB/s), 87.8KiB/s-87.8KiB/s (89.9kB/s-89.9kB/s), io=88.0KiB (90.1kB), run=1002-1002msec 00:14:26.213 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec 00:14:26.213 00:14:26.213 Disk stats (read/write): 00:14:26.213 nvme0n1: ios=69/512, merge=0/0, ticks=812/93, in_queue=905, util=92.08% 00:14:26.213 11:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:26.471 11:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.472 rmmod nvme_tcp 00:14:26.472 rmmod nvme_fabrics 00:14:26.472 rmmod nvme_keyring 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3008014 ']' 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3008014 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3008014 ']' 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3008014 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3008014 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3008014' 00:14:26.472 killing process with pid 3008014 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3008014 00:14:26.472 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3008014 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.730 11:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.267 11:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.267 00:14:29.267 real 0m9.904s 00:14:29.267 user 0m22.583s 00:14:29.267 sys 0m2.293s 00:14:29.267 11:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.267 11:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:29.267 ************************************ 00:14:29.267 END TEST nvmf_nmic 00:14:29.267 ************************************ 00:14:29.267 11:40:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:29.267 11:40:36 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:29.267 11:40:36 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.267 11:40:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.267 11:40:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.267 ************************************ 00:14:29.267 START TEST nvmf_fio_target 00:14:29.267 ************************************ 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:29.268 * Looking for test storage... 00:14:29.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.268 11:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.174 11:40:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:31.174 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:31.174 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:31.175 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.175 11:40:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:31.175 Found net devices under 0000:84:00.0: cvl_0_0 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:31.175 Found net devices under 0000:84:00.1: cvl_0_1 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:14:31.175 00:14:31.175 --- 10.0.0.2 ping statistics --- 00:14:31.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.175 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:14:31.175 00:14:31.175 --- 10.0.0.1 ping statistics --- 00:14:31.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.175 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3010658 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3010658 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3010658 ']' 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
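The nvmf_tcp_init calls traced above build the test topology: the cvl_0_0 port (under 0000:84:00.0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while its peer cvl_0_1 (under 0000:84:00.1) stays in the root namespace as the 10.0.0.1 initiator, with an iptables rule admitting TCP/4420 and a ping in each direction to confirm the link. A condensed sketch of that sequence, assuming the interface names reported in this run and root privileges:

# Sketch of the topology set up by nvmf_tcp_init in nvmf/common.sh.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator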
00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.175 11:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.175 [2024-07-15 11:40:39.031294] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:31.175 [2024-07-15 11:40:39.031382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.175 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.176 [2024-07-15 11:40:39.100136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.434 [2024-07-15 11:40:39.212756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.434 [2024-07-15 11:40:39.212841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.434 [2024-07-15 11:40:39.212855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.434 [2024-07-15 11:40:39.212866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.434 [2024-07-15 11:40:39.212883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.434 [2024-07-15 11:40:39.212941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.434 [2024-07-15 11:40:39.212968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.434 [2024-07-15 11:40:39.213025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.434 [2024-07-15 11:40:39.213028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.434 11:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:31.692 [2024-07-15 11:40:39.636537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.692 11:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.951 11:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:31.951 11:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.518 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:32.518 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.518 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:14:32.518 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.776 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:32.776 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:33.034 11:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.292 11:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:33.292 11:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.550 11:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:33.550 11:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.808 11:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:33.808 11:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:34.066 11:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:34.324 11:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:34.324 11:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:34.583 11:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:34.583 11:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:34.840 11:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.098 [2024-07-15 11:40:42.984523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.098 11:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:35.356 11:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:35.615 11:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.550 11:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:36.550 11:40:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:36.550 11:40:44 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.550 11:40:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:36.550 11:40:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:36.550 11:40:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:38.456 11:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:38.456 [global] 00:14:38.456 thread=1 00:14:38.456 invalidate=1 00:14:38.456 rw=write 00:14:38.456 time_based=1 00:14:38.456 runtime=1 00:14:38.456 ioengine=libaio 00:14:38.456 direct=1 00:14:38.456 bs=4096 00:14:38.456 iodepth=1 00:14:38.456 norandommap=0 00:14:38.456 numjobs=1 00:14:38.456 00:14:38.456 verify_dump=1 00:14:38.456 verify_backlog=512 00:14:38.456 verify_state_save=0 00:14:38.456 do_verify=1 00:14:38.456 verify=crc32c-intel 00:14:38.456 [job0] 00:14:38.456 filename=/dev/nvme0n1 00:14:38.456 [job1] 00:14:38.456 filename=/dev/nvme0n2 00:14:38.456 [job2] 00:14:38.456 filename=/dev/nvme0n3 00:14:38.456 [job3] 00:14:38.456 filename=/dev/nvme0n4 00:14:38.456 Could not set queue depth (nvme0n1) 00:14:38.456 Could not set queue depth (nvme0n2) 00:14:38.456 Could not set queue depth (nvme0n3) 00:14:38.456 Could not set queue depth (nvme0n4) 00:14:38.456 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.456 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.456 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.456 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.456 fio-3.35 00:14:38.456 Starting 4 threads 00:14:39.832 00:14:39.832 job0: (groupid=0, jobs=1): err= 0: pid=3011695: Mon Jul 15 11:40:47 2024 00:14:39.832 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:14:39.832 slat (nsec): min=10348, max=33639, avg=16701.55, stdev=5492.12 00:14:39.832 clat (usec): min=309, max=41923, avg=39177.75, stdev=8684.48 00:14:39.832 lat (usec): min=326, max=41940, avg=39194.45, stdev=8684.38 00:14:39.832 clat percentiles (usec): 00:14:39.832 | 1.00th=[ 310], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:39.832 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:39.832 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:39.832 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:39.832 | 99.99th=[41681] 00:14:39.832 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:14:39.832 slat (nsec): min=10554, max=99654, avg=25485.67, stdev=12447.87 
00:14:39.832 clat (usec): min=158, max=2178, avg=265.14, stdev=98.92 00:14:39.832 lat (usec): min=170, max=2200, avg=290.63, stdev=101.12 00:14:39.832 clat percentiles (usec): 00:14:39.832 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 198], 20.00th=[ 217], 00:14:39.832 | 30.00th=[ 233], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:14:39.832 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 355], 00:14:39.832 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 2180], 99.95th=[ 2180], 00:14:39.832 | 99.99th=[ 2180] 00:14:39.832 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.832 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.832 lat (usec) : 250=44.19%, 500=51.69% 00:14:39.832 lat (msec) : 4=0.19%, 50=3.93% 00:14:39.832 cpu : usr=0.69%, sys=1.28%, ctx=536, majf=0, minf=1 00:14:39.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.832 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.832 job1: (groupid=0, jobs=1): err= 0: pid=3011696: Mon Jul 15 11:40:47 2024 00:14:39.832 read: IOPS=521, BW=2087KiB/s (2137kB/s)(2112KiB/1012msec) 00:14:39.832 slat (nsec): min=6619, max=46191, avg=9750.54, stdev=4809.29 00:14:39.832 clat (usec): min=197, max=41126, avg=1437.21, stdev=6762.42 00:14:39.832 lat (usec): min=205, max=41141, avg=1446.96, stdev=6763.34 00:14:39.832 clat percentiles (usec): 00:14:39.832 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:14:39.832 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:14:39.832 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 469], 95.00th=[ 515], 00:14:39.832 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:39.832 | 99.99th=[41157] 00:14:39.832 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:14:39.832 slat (nsec): min=8233, max=98674, avg=18142.51, stdev=11721.45 00:14:39.832 clat (usec): min=131, max=472, avg=218.16, stdev=61.14 00:14:39.833 lat (usec): min=140, max=520, avg=236.31, stdev=68.50 00:14:39.833 clat percentiles (usec): 00:14:39.833 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:14:39.833 | 30.00th=[ 176], 40.00th=[ 190], 50.00th=[ 206], 60.00th=[ 223], 00:14:39.833 | 70.00th=[ 241], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 334], 00:14:39.833 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 474], 00:14:39.833 | 99.99th=[ 474] 00:14:39.833 bw ( KiB/s): min= 8192, max= 8192, per=81.68%, avg=8192.00, stdev= 0.00, samples=1 00:14:39.833 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:39.833 lat (usec) : 250=63.08%, 500=34.92%, 750=1.03% 00:14:39.833 lat (msec) : 50=0.97% 00:14:39.833 cpu : usr=1.28%, sys=2.96%, ctx=1552, majf=0, minf=2 00:14:39.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.833 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.833 job2: (groupid=0, jobs=1): err= 0: pid=3011697: Mon Jul 15 11:40:47 2024 00:14:39.833 read: 
IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:14:39.833 slat (nsec): min=13170, max=27033, avg=16386.95, stdev=2918.53 00:14:39.833 clat (usec): min=40795, max=41984, avg=41075.38, stdev=305.95 00:14:39.833 lat (usec): min=40822, max=42000, avg=41091.77, stdev=305.20 00:14:39.833 clat percentiles (usec): 00:14:39.833 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:39.833 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:39.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:14:39.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:39.833 | 99.99th=[42206] 00:14:39.833 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:14:39.833 slat (usec): min=10, max=109, avg=27.20, stdev=11.92 00:14:39.833 clat (usec): min=168, max=497, avg=273.63, stdev=52.67 00:14:39.833 lat (usec): min=180, max=525, avg=300.83, stdev=58.93 00:14:39.833 clat percentiles (usec): 00:14:39.833 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 202], 20.00th=[ 227], 00:14:39.833 | 30.00th=[ 245], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:14:39.833 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 355], 00:14:39.833 | 99.00th=[ 400], 99.50th=[ 441], 99.90th=[ 498], 99.95th=[ 498], 00:14:39.833 | 99.99th=[ 498] 00:14:39.833 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.833 lat (usec) : 250=30.96%, 500=65.10% 00:14:39.833 lat (msec) : 50=3.94% 00:14:39.833 cpu : usr=1.08%, sys=1.37%, ctx=533, majf=0, minf=1 00:14:39.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.833 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.833 job3: (groupid=0, jobs=1): err= 0: pid=3011698: Mon Jul 15 11:40:47 2024 00:14:39.833 read: IOPS=60, BW=242KiB/s (248kB/s)(244KiB/1009msec) 00:14:39.833 slat (nsec): min=9923, max=55249, avg=13494.15, stdev=7234.13 00:14:39.833 clat (usec): min=267, max=41979, avg=13660.53, stdev=19270.72 00:14:39.833 lat (usec): min=278, max=41995, avg=13674.02, stdev=19273.23 00:14:39.833 clat percentiles (usec): 00:14:39.833 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 289], 00:14:39.833 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 334], 00:14:39.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:39.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:39.833 | 99.99th=[42206] 00:14:39.833 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:14:39.833 slat (usec): min=12, max=160, avg=31.11, stdev=19.43 00:14:39.833 clat (usec): min=163, max=525, avg=302.84, stdev=75.28 00:14:39.833 lat (usec): min=176, max=554, avg=333.94, stdev=85.24 00:14:39.833 clat percentiles (usec): 00:14:39.833 | 1.00th=[ 169], 5.00th=[ 190], 10.00th=[ 206], 20.00th=[ 233], 00:14:39.833 | 30.00th=[ 255], 40.00th=[ 277], 50.00th=[ 297], 60.00th=[ 322], 00:14:39.833 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 437], 00:14:39.833 | 99.00th=[ 478], 99.50th=[ 482], 99.90th=[ 529], 99.95th=[ 529], 00:14:39.833 | 99.99th=[ 529] 00:14:39.833 bw ( KiB/s): min= 4096, max= 
4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.833 lat (usec) : 250=24.26%, 500=72.08%, 750=0.17% 00:14:39.833 lat (msec) : 50=3.49% 00:14:39.833 cpu : usr=0.69%, sys=1.79%, ctx=573, majf=0, minf=1 00:14:39.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.833 issued rwts: total=61,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.833 00:14:39.833 Run status group 0 (all jobs): 00:14:39.833 READ: bw=2476KiB/s (2535kB/s), 82.3KiB/s-2087KiB/s (84.2kB/s-2137kB/s), io=2528KiB (2589kB), run=1009-1021msec 00:14:39.833 WRITE: bw=9.79MiB/s (10.3MB/s), 2006KiB/s-4047KiB/s (2054kB/s-4145kB/s), io=10.0MiB (10.5MB), run=1009-1021msec 00:14:39.833 00:14:39.833 Disk stats (read/write): 00:14:39.833 nvme0n1: ios=43/512, merge=0/0, ticks=1645/125, in_queue=1770, util=97.29% 00:14:39.833 nvme0n2: ios=527/1024, merge=0/0, ticks=553/210, in_queue=763, util=86.05% 00:14:39.833 nvme0n3: ios=41/512, merge=0/0, ticks=847/126, in_queue=973, util=89.92% 00:14:39.833 nvme0n4: ios=56/512, merge=0/0, ticks=628/142, in_queue=770, util=89.61% 00:14:39.833 11:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:39.833 [global] 00:14:39.833 thread=1 00:14:39.833 invalidate=1 00:14:39.833 rw=randwrite 00:14:39.833 time_based=1 00:14:39.833 runtime=1 00:14:39.833 ioengine=libaio 00:14:39.833 direct=1 00:14:39.833 bs=4096 00:14:39.833 iodepth=1 00:14:39.833 norandommap=0 00:14:39.833 numjobs=1 00:14:39.833 00:14:39.833 verify_dump=1 00:14:39.833 verify_backlog=512 00:14:39.833 verify_state_save=0 00:14:39.833 do_verify=1 00:14:39.833 verify=crc32c-intel 00:14:39.833 [job0] 00:14:39.833 filename=/dev/nvme0n1 00:14:39.833 [job1] 00:14:39.833 filename=/dev/nvme0n2 00:14:39.833 [job2] 00:14:39.833 filename=/dev/nvme0n3 00:14:39.833 [job3] 00:14:39.833 filename=/dev/nvme0n4 00:14:39.833 Could not set queue depth (nvme0n1) 00:14:39.833 Could not set queue depth (nvme0n2) 00:14:39.833 Could not set queue depth (nvme0n3) 00:14:39.833 Could not set queue depth (nvme0n4) 00:14:40.091 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.091 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.091 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.091 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.091 fio-3.35 00:14:40.091 Starting 4 threads 00:14:41.468 00:14:41.468 job0: (groupid=0, jobs=1): err= 0: pid=3011926: Mon Jul 15 11:40:49 2024 00:14:41.468 read: IOPS=1935, BW=7740KiB/s (7926kB/s)(7748KiB/1001msec) 00:14:41.468 slat (nsec): min=5888, max=54289, avg=8567.27, stdev=3803.79 00:14:41.468 clat (usec): min=186, max=597, avg=275.88, stdev=61.63 00:14:41.468 lat (usec): min=193, max=613, avg=284.45, stdev=63.14 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 223], 00:14:41.468 | 30.00th=[ 235], 40.00th=[ 249], 50.00th=[ 269], 
60.00th=[ 277], 00:14:41.468 | 70.00th=[ 293], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 375], 00:14:41.468 | 99.00th=[ 498], 99.50th=[ 515], 99.90th=[ 586], 99.95th=[ 594], 00:14:41.468 | 99.99th=[ 594] 00:14:41.468 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:41.468 slat (nsec): min=7398, max=60197, avg=10959.58, stdev=5492.78 00:14:41.468 clat (usec): min=135, max=1135, avg=202.46, stdev=45.97 00:14:41.468 lat (usec): min=144, max=1143, avg=213.42, stdev=47.42 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 172], 00:14:41.468 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:14:41.468 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 265], 95.00th=[ 289], 00:14:41.468 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 404], 99.95th=[ 578], 00:14:41.468 | 99.99th=[ 1139] 00:14:41.468 bw ( KiB/s): min= 8192, max= 8192, per=31.72%, avg=8192.00, stdev= 0.00, samples=1 00:14:41.468 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:41.468 lat (usec) : 250=64.87%, 500=34.68%, 750=0.43% 00:14:41.468 lat (msec) : 2=0.03% 00:14:41.468 cpu : usr=3.20%, sys=5.30%, ctx=3985, majf=0, minf=1 00:14:41.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 issued rwts: total=1937,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:41.468 job1: (groupid=0, jobs=1): err= 0: pid=3011935: Mon Jul 15 11:40:49 2024 00:14:41.468 read: IOPS=1662, BW=6649KiB/s (6809kB/s)(6656KiB/1001msec) 00:14:41.468 slat (nsec): min=4812, max=30051, avg=8226.90, stdev=3509.91 00:14:41.468 clat (usec): min=204, max=40709, avg=340.47, stdev=993.10 00:14:41.468 lat (usec): min=210, max=40723, avg=348.69, stdev=993.33 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 249], 00:14:41.468 | 30.00th=[ 262], 40.00th=[ 277], 50.00th=[ 302], 60.00th=[ 326], 00:14:41.468 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 457], 00:14:41.468 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 660], 99.95th=[40633], 00:14:41.468 | 99.99th=[40633] 00:14:41.468 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:41.468 slat (nsec): min=5919, max=54851, avg=8480.04, stdev=3001.88 00:14:41.468 clat (usec): min=139, max=397, avg=192.21, stdev=30.74 00:14:41.468 lat (usec): min=146, max=407, avg=200.69, stdev=31.62 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:14:41.468 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:14:41.468 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 233], 95.00th=[ 247], 00:14:41.468 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 383], 00:14:41.468 | 99.99th=[ 396] 00:14:41.468 bw ( KiB/s): min= 8175, max= 8175, per=31.66%, avg=8175.00, stdev= 0.00, samples=1 00:14:41.468 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:14:41.468 lat (usec) : 250=62.28%, 500=36.31%, 750=1.37% 00:14:41.468 lat (msec) : 50=0.03% 00:14:41.468 cpu : usr=1.80%, sys=3.90%, ctx=3713, majf=0, minf=2 00:14:41.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:14:41.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 issued rwts: total=1664,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:41.468 job2: (groupid=0, jobs=1): err= 0: pid=3011963: Mon Jul 15 11:40:49 2024 00:14:41.468 read: IOPS=157, BW=629KiB/s (644kB/s)(648KiB/1031msec) 00:14:41.468 slat (nsec): min=7896, max=31456, avg=9682.41, stdev=2640.64 00:14:41.468 clat (usec): min=256, max=41337, avg=5573.95, stdev=13684.33 00:14:41.468 lat (usec): min=267, max=41346, avg=5583.63, stdev=13685.03 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 293], 00:14:41.468 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:14:41.468 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[40633], 95.00th=[41157], 00:14:41.468 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:41.468 | 99.99th=[41157] 00:14:41.468 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:14:41.468 slat (nsec): min=7779, max=52919, avg=10768.06, stdev=3582.04 00:14:41.468 clat (usec): min=147, max=428, avg=231.63, stdev=56.15 00:14:41.468 lat (usec): min=156, max=439, avg=242.40, stdev=56.43 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 186], 00:14:41.468 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 223], 60.00th=[ 233], 00:14:41.468 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 306], 95.00th=[ 383], 00:14:41.468 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 429], 99.95th=[ 429], 00:14:41.468 | 99.99th=[ 429] 00:14:41.468 bw ( KiB/s): min= 4087, max= 4087, per=15.83%, avg=4087.00, stdev= 0.00, samples=1 00:14:41.468 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:41.468 lat (usec) : 250=56.97%, 500=39.91% 00:14:41.468 lat (msec) : 50=3.12% 00:14:41.468 cpu : usr=0.10%, sys=0.87%, ctx=674, majf=0, minf=1 00:14:41.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 issued rwts: total=162,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:41.468 job3: (groupid=0, jobs=1): err= 0: pid=3011976: Mon Jul 15 11:40:49 2024 00:14:41.468 read: IOPS=1925, BW=7700KiB/s (7885kB/s)(7708KiB/1001msec) 00:14:41.468 slat (nsec): min=6054, max=36158, avg=8483.52, stdev=3618.24 00:14:41.468 clat (usec): min=212, max=664, avg=282.93, stdev=85.64 00:14:41.468 lat (usec): min=218, max=686, avg=291.42, stdev=87.51 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:14:41.468 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:14:41.468 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 424], 95.00th=[ 490], 00:14:41.468 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 668], 99.95th=[ 668], 00:14:41.468 | 99.99th=[ 668] 00:14:41.468 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:41.468 slat (nsec): min=7235, max=60097, avg=11681.34, stdev=6907.58 00:14:41.468 clat (usec): min=143, max=506, avg=196.76, stdev=60.28 00:14:41.468 lat (usec): min=151, max=531, avg=208.44, stdev=66.17 00:14:41.468 clat percentiles (usec): 00:14:41.468 | 1.00th=[ 151], 5.00th=[ 157], 
10.00th=[ 161], 20.00th=[ 163], 00:14:41.468 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:14:41.468 | 70.00th=[ 186], 80.00th=[ 198], 90.00th=[ 297], 95.00th=[ 359], 00:14:41.468 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 453], 99.95th=[ 461], 00:14:41.468 | 99.99th=[ 506] 00:14:41.468 bw ( KiB/s): min= 8192, max= 8192, per=31.72%, avg=8192.00, stdev= 0.00, samples=1 00:14:41.468 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:41.468 lat (usec) : 250=68.05%, 500=29.84%, 750=2.11% 00:14:41.468 cpu : usr=3.00%, sys=5.50%, ctx=3975, majf=0, minf=1 00:14:41.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.468 issued rwts: total=1927,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:41.468 00:14:41.468 Run status group 0 (all jobs): 00:14:41.468 READ: bw=21.6MiB/s (22.6MB/s), 629KiB/s-7740KiB/s (644kB/s-7926kB/s), io=22.2MiB (23.3MB), run=1001-1031msec 00:14:41.468 WRITE: bw=25.2MiB/s (26.4MB/s), 1986KiB/s-8184KiB/s (2034kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1031msec 00:14:41.468 00:14:41.468 Disk stats (read/write): 00:14:41.468 nvme0n1: ios=1586/1899, merge=0/0, ticks=434/362, in_queue=796, util=86.37% 00:14:41.468 nvme0n2: ios=1498/1536, merge=0/0, ticks=517/298, in_queue=815, util=86.56% 00:14:41.468 nvme0n3: ios=154/512, merge=0/0, ticks=696/116, in_queue=812, util=88.75% 00:14:41.468 nvme0n4: ios=1536/1738, merge=0/0, ticks=434/330, in_queue=764, util=89.50% 00:14:41.468 11:40:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:41.468 [global] 00:14:41.468 thread=1 00:14:41.468 invalidate=1 00:14:41.468 rw=write 00:14:41.468 time_based=1 00:14:41.468 runtime=1 00:14:41.469 ioengine=libaio 00:14:41.469 direct=1 00:14:41.469 bs=4096 00:14:41.469 iodepth=128 00:14:41.469 norandommap=0 00:14:41.469 numjobs=1 00:14:41.469 00:14:41.469 verify_dump=1 00:14:41.469 verify_backlog=512 00:14:41.469 verify_state_save=0 00:14:41.469 do_verify=1 00:14:41.469 verify=crc32c-intel 00:14:41.469 [job0] 00:14:41.469 filename=/dev/nvme0n1 00:14:41.469 [job1] 00:14:41.469 filename=/dev/nvme0n2 00:14:41.469 [job2] 00:14:41.469 filename=/dev/nvme0n3 00:14:41.469 [job3] 00:14:41.469 filename=/dev/nvme0n4 00:14:41.469 Could not set queue depth (nvme0n1) 00:14:41.469 Could not set queue depth (nvme0n2) 00:14:41.469 Could not set queue depth (nvme0n3) 00:14:41.469 Could not set queue depth (nvme0n4) 00:14:41.469 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.469 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.469 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.469 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.469 fio-3.35 00:14:41.469 Starting 4 threads 00:14:42.857 00:14:42.857 job0: (groupid=0, jobs=1): err= 0: pid=3012280: Mon Jul 15 11:40:50 2024 00:14:42.857 read: IOPS=3055, BW=11.9MiB/s (12.5MB/s)(12.5MiB/1047msec) 00:14:42.857 slat (usec): min=2, max=14163, avg=117.79, stdev=850.28 00:14:42.857 clat 
(usec): min=736, max=96272, avg=16634.60, stdev=13439.76 00:14:42.857 lat (usec): min=742, max=96277, avg=16752.40, stdev=13519.53 00:14:42.857 clat percentiles (usec): 00:14:42.857 | 1.00th=[ 1827], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 9241], 00:14:42.857 | 30.00th=[10159], 40.00th=[11076], 50.00th=[12125], 60.00th=[14091], 00:14:42.857 | 70.00th=[19268], 80.00th=[21890], 90.00th=[25822], 95.00th=[29754], 00:14:42.857 | 99.00th=[95945], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:14:42.857 | 99.99th=[95945] 00:14:42.857 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1047msec); 0 zone resets 00:14:42.857 slat (usec): min=3, max=42927, avg=159.92, stdev=1555.40 00:14:42.857 clat (msec): min=2, max=168, avg=17.81, stdev=14.29 00:14:42.857 lat (msec): min=2, max=168, avg=17.97, stdev=14.55 00:14:42.857 clat percentiles (msec): 00:14:42.857 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:14:42.857 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 18], 00:14:42.857 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 26], 95.00th=[ 39], 00:14:42.857 | 99.00th=[ 64], 99.50th=[ 95], 99.90th=[ 169], 99.95th=[ 169], 00:14:42.857 | 99.99th=[ 169] 00:14:42.857 bw ( KiB/s): min=12288, max=16376, per=24.57%, avg=14332.00, stdev=2890.65, samples=2 00:14:42.857 iops : min= 3072, max= 4094, avg=3583.00, stdev=722.66, samples=2 00:14:42.857 lat (usec) : 750=0.07%, 1000=0.22% 00:14:42.857 lat (msec) : 2=0.31%, 4=1.00%, 10=18.59%, 20=51.23%, 50=24.64% 00:14:42.857 lat (msec) : 100=3.70%, 250=0.24% 00:14:42.857 cpu : usr=2.96%, sys=7.07%, ctx=269, majf=0, minf=13 00:14:42.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:42.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.857 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.857 job1: (groupid=0, jobs=1): err= 0: pid=3012281: Mon Jul 15 11:40:50 2024 00:14:42.857 read: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1006msec) 00:14:42.857 slat (usec): min=2, max=13296, avg=102.40, stdev=676.45 00:14:42.857 clat (usec): min=2812, max=36739, avg=12810.98, stdev=4270.10 00:14:42.857 lat (usec): min=2819, max=40703, avg=12913.38, stdev=4335.59 00:14:42.857 clat percentiles (usec): 00:14:42.857 | 1.00th=[ 5604], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10683], 00:14:42.857 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:14:42.857 | 70.00th=[13042], 80.00th=[14091], 90.00th=[16909], 95.00th=[21103], 00:14:42.857 | 99.00th=[30016], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:14:42.857 | 99.99th=[36963] 00:14:42.857 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:14:42.857 slat (usec): min=4, max=9456, avg=108.09, stdev=564.77 00:14:42.857 clat (usec): min=3087, max=39311, avg=14972.84, stdev=7631.45 00:14:42.857 lat (usec): min=3094, max=39319, avg=15080.93, stdev=7685.96 00:14:42.857 clat percentiles (usec): 00:14:42.857 | 1.00th=[ 4621], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:14:42.857 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11207], 60.00th=[12387], 00:14:42.857 | 70.00th=[16450], 80.00th=[22938], 90.00th=[27132], 95.00th=[30278], 00:14:42.857 | 99.00th=[36439], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:14:42.857 | 99.99th=[39060] 00:14:42.857 bw ( KiB/s): min=16384, max=20480, per=31.59%, 
avg=18432.00, stdev=2896.31, samples=2 00:14:42.857 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:14:42.857 lat (msec) : 4=0.26%, 10=23.69%, 20=60.98%, 50=15.07% 00:14:42.858 cpu : usr=4.68%, sys=7.56%, ctx=408, majf=0, minf=13 00:14:42.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:42.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.858 issued rwts: total=4553,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.858 job2: (groupid=0, jobs=1): err= 0: pid=3012282: Mon Jul 15 11:40:50 2024 00:14:42.858 read: IOPS=2720, BW=10.6MiB/s (11.1MB/s)(11.1MiB/1047msec) 00:14:42.858 slat (usec): min=2, max=40331, avg=180.53, stdev=1352.65 00:14:42.858 clat (usec): min=533, max=136553, avg=25792.61, stdev=25150.26 00:14:42.858 lat (usec): min=562, max=136559, avg=25973.14, stdev=25272.68 00:14:42.858 clat percentiles (usec): 00:14:42.858 | 1.00th=[ 1532], 5.00th=[ 10683], 10.00th=[ 11469], 20.00th=[ 12387], 00:14:42.858 | 30.00th=[ 12780], 40.00th=[ 13435], 50.00th=[ 14222], 60.00th=[ 16712], 00:14:42.858 | 70.00th=[ 21890], 80.00th=[ 32900], 90.00th=[ 62653], 95.00th=[ 85459], 00:14:42.858 | 99.00th=[135267], 99.50th=[137364], 99.90th=[137364], 99.95th=[137364], 00:14:42.858 | 99.99th=[137364] 00:14:42.858 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets 00:14:42.858 slat (usec): min=4, max=20697, avg=145.44, stdev=940.30 00:14:42.858 clat (usec): min=8145, max=67258, avg=19275.79, stdev=13097.55 00:14:42.858 lat (usec): min=8191, max=67266, avg=19421.23, stdev=13185.16 00:14:42.858 clat percentiles (usec): 00:14:42.858 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11863], 00:14:42.858 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13435], 00:14:42.858 | 70.00th=[22152], 80.00th=[26608], 90.00th=[35914], 95.00th=[53216], 00:14:42.858 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:14:42.858 | 99.99th=[67634] 00:14:42.858 bw ( KiB/s): min= 7584, max=16992, per=21.06%, avg=12288.00, stdev=6652.46, samples=2 00:14:42.858 iops : min= 1896, max= 4248, avg=3072.00, stdev=1663.12, samples=2 00:14:42.858 lat (usec) : 750=0.02%, 1000=0.02% 00:14:42.858 lat (msec) : 2=0.46%, 10=4.26%, 20=62.64%, 50=22.55%, 100=9.02% 00:14:42.858 lat (msec) : 250=1.05% 00:14:42.858 cpu : usr=2.68%, sys=6.88%, ctx=266, majf=0, minf=13 00:14:42.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:42.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.858 issued rwts: total=2848,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.858 job3: (groupid=0, jobs=1): err= 0: pid=3012283: Mon Jul 15 11:40:50 2024 00:14:42.858 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:14:42.858 slat (usec): min=3, max=22522, avg=128.02, stdev=765.52 00:14:42.858 clat (usec): min=8368, max=53095, avg=16384.35, stdev=7960.15 00:14:42.858 lat (usec): min=8377, max=63522, avg=16512.37, stdev=8023.58 00:14:42.858 clat percentiles (usec): 00:14:42.858 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:14:42.858 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 
00:14:42.858 | 70.00th=[17433], 80.00th=[21890], 90.00th=[24773], 95.00th=[30016], 00:14:42.858 | 99.00th=[47449], 99.50th=[49546], 99.90th=[53216], 99.95th=[53216], 00:14:42.858 | 99.99th=[53216] 00:14:42.858 write: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1003msec); 0 zone resets 00:14:42.858 slat (usec): min=5, max=24800, avg=125.51, stdev=847.59 00:14:42.858 clat (usec): min=421, max=69449, avg=16535.06, stdev=8617.85 00:14:42.858 lat (usec): min=3443, max=69475, avg=16660.57, stdev=8680.04 00:14:42.858 clat percentiles (usec): 00:14:42.858 | 1.00th=[ 3884], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:14:42.858 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[16188], 00:14:42.858 | 70.00th=[19006], 80.00th=[22152], 90.00th=[23462], 95.00th=[27657], 00:14:42.858 | 99.00th=[64226], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:14:42.858 | 99.99th=[69731] 00:14:42.858 bw ( KiB/s): min=12288, max=18744, per=26.59%, avg=15516.00, stdev=4565.08, samples=2 00:14:42.858 iops : min= 3072, max= 4686, avg=3879.00, stdev=1141.27, samples=2 00:14:42.858 lat (usec) : 500=0.01% 00:14:42.858 lat (msec) : 4=0.55%, 10=2.41%, 20=71.41%, 50=24.36%, 100=1.25% 00:14:42.858 cpu : usr=4.69%, sys=8.68%, ctx=361, majf=0, minf=11 00:14:42.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:42.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.858 issued rwts: total=3584,4007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.858 00:14:42.858 Run status group 0 (all jobs): 00:14:42.858 READ: bw=52.9MiB/s (55.5MB/s), 10.6MiB/s-17.7MiB/s (11.1MB/s-18.5MB/s), io=55.4MiB (58.1MB), run=1003-1047msec 00:14:42.858 WRITE: bw=57.0MiB/s (59.7MB/s), 11.5MiB/s-17.9MiB/s (12.0MB/s-18.8MB/s), io=59.7MiB (62.6MB), run=1003-1047msec 00:14:42.858 00:14:42.858 Disk stats (read/write): 00:14:42.858 nvme0n1: ios=2922/3072, merge=0/0, ticks=22611/19168, in_queue=41779, util=99.80% 00:14:42.858 nvme0n2: ios=3634/4055, merge=0/0, ticks=33587/47893, in_queue=81480, util=97.97% 00:14:42.858 nvme0n3: ios=2610/3072, merge=0/0, ticks=15957/20219, in_queue=36176, util=98.33% 00:14:42.858 nvme0n4: ios=2868/3072, merge=0/0, ticks=16709/16536, in_queue=33245, util=96.63% 00:14:42.858 11:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:42.858 [global] 00:14:42.858 thread=1 00:14:42.858 invalidate=1 00:14:42.858 rw=randwrite 00:14:42.858 time_based=1 00:14:42.858 runtime=1 00:14:42.858 ioengine=libaio 00:14:42.858 direct=1 00:14:42.858 bs=4096 00:14:42.858 iodepth=128 00:14:42.858 norandommap=0 00:14:42.858 numjobs=1 00:14:42.858 00:14:42.858 verify_dump=1 00:14:42.858 verify_backlog=512 00:14:42.858 verify_state_save=0 00:14:42.858 do_verify=1 00:14:42.858 verify=crc32c-intel 00:14:42.858 [job0] 00:14:42.858 filename=/dev/nvme0n1 00:14:42.858 [job1] 00:14:42.858 filename=/dev/nvme0n2 00:14:42.858 [job2] 00:14:42.858 filename=/dev/nvme0n3 00:14:42.858 [job3] 00:14:42.858 filename=/dev/nvme0n4 00:14:42.858 Could not set queue depth (nvme0n1) 00:14:42.858 Could not set queue depth (nvme0n2) 00:14:42.858 Could not set queue depth (nvme0n3) 00:14:42.858 Could not set queue depth (nvme0n4) 00:14:43.118 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:14:43.118 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.118 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.118 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.118 fio-3.35 00:14:43.118 Starting 4 threads 00:14:44.136 00:14:44.136 job0: (groupid=0, jobs=1): err= 0: pid=3012509: Mon Jul 15 11:40:52 2024 00:14:44.136 read: IOPS=5018, BW=19.6MiB/s (20.6MB/s)(19.6MiB/1002msec) 00:14:44.136 slat (usec): min=3, max=12937, avg=88.95, stdev=514.03 00:14:44.136 clat (usec): min=877, max=32740, avg=11850.80, stdev=3525.81 00:14:44.136 lat (usec): min=2318, max=45666, avg=11939.75, stdev=3561.88 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 6063], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10159], 00:14:44.136 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:14:44.136 | 70.00th=[11731], 80.00th=[12125], 90.00th=[15008], 95.00th=[20579], 00:14:44.136 | 99.00th=[28443], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:14:44.136 | 99.99th=[32637] 00:14:44.136 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:14:44.136 slat (usec): min=4, max=9830, avg=98.57, stdev=564.78 00:14:44.136 clat (usec): min=6027, max=31938, avg=13009.45, stdev=5063.58 00:14:44.136 lat (usec): min=6033, max=31960, avg=13108.01, stdev=5108.23 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10159], 00:14:44.136 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[11076], 00:14:44.136 | 70.00th=[11731], 80.00th=[16712], 90.00th=[22414], 95.00th=[24249], 00:14:44.136 | 99.00th=[28967], 99.50th=[30802], 99.90th=[31851], 99.95th=[31851], 00:14:44.136 | 99.99th=[31851] 00:14:44.136 bw ( KiB/s): min=16384, max=24576, per=31.45%, avg=20480.00, stdev=5792.62, samples=2 00:14:44.136 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:14:44.136 lat (usec) : 1000=0.01% 00:14:44.136 lat (msec) : 4=0.24%, 10=16.71%, 20=73.22%, 50=9.82% 00:14:44.136 cpu : usr=4.90%, sys=9.99%, ctx=407, majf=0, minf=13 00:14:44.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:44.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.136 issued rwts: total=5029,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.136 job1: (groupid=0, jobs=1): err= 0: pid=3012510: Mon Jul 15 11:40:52 2024 00:14:44.136 read: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1007msec) 00:14:44.136 slat (usec): min=3, max=20580, avg=121.34, stdev=766.09 00:14:44.136 clat (usec): min=2741, max=33542, avg=15192.94, stdev=3106.60 00:14:44.136 lat (usec): min=7851, max=33601, avg=15314.28, stdev=3168.84 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 8291], 5.00th=[11863], 10.00th=[12387], 20.00th=[13304], 00:14:44.136 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:14:44.136 | 70.00th=[15533], 80.00th=[16581], 90.00th=[18482], 95.00th=[21627], 00:14:44.136 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:14:44.136 | 99.99th=[33424] 00:14:44.136 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone 
resets 00:14:44.136 slat (usec): min=5, max=8212, avg=116.58, stdev=650.28 00:14:44.136 clat (usec): min=6141, max=36081, avg=16187.28, stdev=5632.68 00:14:44.136 lat (usec): min=6183, max=36115, avg=16303.86, stdev=5681.35 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 7963], 5.00th=[10159], 10.00th=[10814], 20.00th=[11600], 00:14:44.136 | 30.00th=[12649], 40.00th=[13435], 50.00th=[13960], 60.00th=[15270], 00:14:44.136 | 70.00th=[17433], 80.00th=[22676], 90.00th=[24511], 95.00th=[27919], 00:14:44.136 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:14:44.136 | 99.99th=[35914] 00:14:44.136 bw ( KiB/s): min=14240, max=18528, per=25.16%, avg=16384.00, stdev=3032.07, samples=2 00:14:44.136 iops : min= 3560, max= 4632, avg=4096.00, stdev=758.02, samples=2 00:14:44.136 lat (msec) : 4=0.01%, 10=3.25%, 20=80.64%, 50=16.10% 00:14:44.136 cpu : usr=4.57%, sys=7.65%, ctx=307, majf=0, minf=11 00:14:44.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:44.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.136 issued rwts: total=4023,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.136 job2: (groupid=0, jobs=1): err= 0: pid=3012511: Mon Jul 15 11:40:52 2024 00:14:44.136 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:14:44.136 slat (usec): min=2, max=13198, avg=180.94, stdev=1040.20 00:14:44.136 clat (usec): min=2775, max=52517, avg=22699.50, stdev=9641.76 00:14:44.136 lat (usec): min=2793, max=52536, avg=22880.43, stdev=9733.06 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 5866], 5.00th=[ 9634], 10.00th=[12125], 20.00th=[14746], 00:14:44.136 | 30.00th=[16319], 40.00th=[17957], 50.00th=[19530], 60.00th=[21890], 00:14:44.136 | 70.00th=[27657], 80.00th=[33817], 90.00th=[36963], 95.00th=[40109], 00:14:44.136 | 99.00th=[44827], 99.50th=[46400], 99.90th=[47449], 99.95th=[49546], 00:14:44.136 | 99.99th=[52691] 00:14:44.136 write: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(11.9MiB/1007msec); 0 zone resets 00:14:44.136 slat (usec): min=3, max=12204, avg=168.18, stdev=725.02 00:14:44.136 clat (usec): min=1590, max=56534, avg=22841.28, stdev=12530.81 00:14:44.136 lat (usec): min=1621, max=56581, avg=23009.46, stdev=12609.17 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 6128], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[12256], 00:14:44.136 | 30.00th=[12911], 40.00th=[14877], 50.00th=[18220], 60.00th=[22676], 00:14:44.136 | 70.00th=[26608], 80.00th=[38011], 90.00th=[42730], 95.00th=[45876], 00:14:44.136 | 99.00th=[52691], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:14:44.136 | 99.99th=[56361] 00:14:44.136 bw ( KiB/s): min= 9856, max=13392, per=17.85%, avg=11624.00, stdev=2500.33, samples=2 00:14:44.136 iops : min= 2464, max= 3348, avg=2906.00, stdev=625.08, samples=2 00:14:44.136 lat (msec) : 2=0.18%, 4=0.07%, 10=6.20%, 20=45.48%, 50=47.23% 00:14:44.136 lat (msec) : 100=0.84% 00:14:44.136 cpu : usr=2.78%, sys=5.77%, ctx=321, majf=0, minf=15 00:14:44.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:44.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.136 issued rwts: total=2560,3034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.136 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:14:44.136 job3: (groupid=0, jobs=1): err= 0: pid=3012512: Mon Jul 15 11:40:52 2024 00:14:44.136 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:14:44.136 slat (usec): min=2, max=43877, avg=133.58, stdev=1121.62 00:14:44.136 clat (usec): min=6524, max=55233, avg=16694.97, stdev=10049.17 00:14:44.136 lat (usec): min=6530, max=55240, avg=16828.54, stdev=10080.96 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 9765], 5.00th=[10945], 10.00th=[11600], 20.00th=[12518], 00:14:44.136 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13435], 00:14:44.136 | 70.00th=[14353], 80.00th=[16057], 90.00th=[23987], 95.00th=[53216], 00:14:44.136 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:14:44.136 | 99.99th=[55313] 00:14:44.136 write: IOPS=4140, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec); 0 zone resets 00:14:44.136 slat (usec): min=3, max=8152, avg=101.17, stdev=509.41 00:14:44.136 clat (usec): min=286, max=55096, avg=13950.13, stdev=4781.36 00:14:44.136 lat (usec): min=2749, max=55103, avg=14051.30, stdev=4773.05 00:14:44.136 clat percentiles (usec): 00:14:44.136 | 1.00th=[ 6390], 5.00th=[10028], 10.00th=[11076], 20.00th=[12125], 00:14:44.136 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:14:44.136 | 70.00th=[13698], 80.00th=[15270], 90.00th=[18482], 95.00th=[20841], 00:14:44.136 | 99.00th=[30540], 99.50th=[52167], 99.90th=[55313], 99.95th=[55313], 00:14:44.136 | 99.99th=[55313] 00:14:44.136 bw ( KiB/s): min=16136, max=16136, per=24.78%, avg=16136.00, stdev= 0.00, samples=1 00:14:44.136 iops : min= 4034, max= 4034, avg=4034.00, stdev= 0.00, samples=1 00:14:44.136 lat (usec) : 500=0.01% 00:14:44.136 lat (msec) : 4=0.39%, 10=2.78%, 20=86.49%, 50=7.24%, 100=3.08% 00:14:44.136 cpu : usr=3.40%, sys=6.00%, ctx=407, majf=0, minf=11 00:14:44.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:44.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.137 issued rwts: total=4096,4145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.137 00:14:44.137 Run status group 0 (all jobs): 00:14:44.137 READ: bw=60.9MiB/s (63.9MB/s), 9.93MiB/s-19.6MiB/s (10.4MB/s-20.6MB/s), io=61.4MiB (64.3MB), run=1001-1007msec 00:14:44.137 WRITE: bw=63.6MiB/s (66.7MB/s), 11.8MiB/s-20.0MiB/s (12.3MB/s-20.9MB/s), io=64.0MiB (67.2MB), run=1001-1007msec 00:14:44.137 00:14:44.137 Disk stats (read/write): 00:14:44.137 nvme0n1: ios=4126/4202, merge=0/0, ticks=17372/18180, in_queue=35552, util=97.90% 00:14:44.137 nvme0n2: ios=3551/3584, merge=0/0, ticks=27013/23919, in_queue=50932, util=98.78% 00:14:44.137 nvme0n3: ios=2188/2560, merge=0/0, ticks=19391/20549, in_queue=39940, util=97.49% 00:14:44.137 nvme0n4: ios=3445/3584, merge=0/0, ticks=14823/11946, in_queue=26769, util=98.21% 00:14:44.137 11:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:44.137 11:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3012650 00:14:44.137 11:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:44.137 11:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:44.137 [global] 00:14:44.137 thread=1 00:14:44.137 invalidate=1 00:14:44.137 rw=read 00:14:44.137 time_based=1 
00:14:44.137 runtime=10 00:14:44.137 ioengine=libaio 00:14:44.137 direct=1 00:14:44.137 bs=4096 00:14:44.137 iodepth=1 00:14:44.137 norandommap=1 00:14:44.137 numjobs=1 00:14:44.137 00:14:44.137 [job0] 00:14:44.137 filename=/dev/nvme0n1 00:14:44.137 [job1] 00:14:44.137 filename=/dev/nvme0n2 00:14:44.137 [job2] 00:14:44.137 filename=/dev/nvme0n3 00:14:44.137 [job3] 00:14:44.137 filename=/dev/nvme0n4 00:14:44.137 Could not set queue depth (nvme0n1) 00:14:44.137 Could not set queue depth (nvme0n2) 00:14:44.137 Could not set queue depth (nvme0n3) 00:14:44.137 Could not set queue depth (nvme0n4) 00:14:44.396 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.396 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.396 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.396 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.396 fio-3.35 00:14:44.396 Starting 4 threads 00:14:47.677 11:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:47.677 11:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:47.677 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=34484224, buflen=4096 00:14:47.677 fio: pid=3012748, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:47.677 11:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:47.677 11:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:47.677 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=16019456, buflen=4096 00:14:47.677 fio: pid=3012741, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:47.934 11:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:47.934 11:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:47.934 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2916352, buflen=4096 00:14:47.934 fio: pid=3012739, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:48.193 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3862528, buflen=4096 00:14:48.193 fio: pid=3012740, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:48.193 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.193 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:48.193 00:14:48.193 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3012739: Mon Jul 15 11:40:56 2024 00:14:48.193 read: IOPS=205, BW=821KiB/s (841kB/s)(2848KiB/3467msec) 00:14:48.193 slat (usec): min=5, max=29929, avg=62.66, stdev=1158.03 00:14:48.193 clat (usec): min=205, max=41994, avg=4772.05, stdev=12807.74 00:14:48.193 lat 
(usec): min=214, max=71092, avg=4834.79, stdev=13012.10 00:14:48.193 clat percentiles (usec): 00:14:48.193 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:14:48.193 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 265], 00:14:48.193 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[41157], 95.00th=[41157], 00:14:48.193 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:48.193 | 99.99th=[42206] 00:14:48.193 bw ( KiB/s): min= 96, max= 3912, per=6.22%, avg=936.00, stdev=1532.96, samples=6 00:14:48.193 iops : min= 24, max= 978, avg=234.00, stdev=383.24, samples=6 00:14:48.193 lat (usec) : 250=49.79%, 500=38.99% 00:14:48.193 lat (msec) : 50=11.08% 00:14:48.193 cpu : usr=0.14%, sys=0.23%, ctx=716, majf=0, minf=1 00:14:48.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.193 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.193 issued rwts: total=713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.193 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3012740: Mon Jul 15 11:40:56 2024 00:14:48.193 read: IOPS=253, BW=1015KiB/s (1039kB/s)(3772KiB/3718msec) 00:14:48.193 slat (usec): min=6, max=31930, avg=71.96, stdev=1232.80 00:14:48.193 clat (usec): min=181, max=58246, avg=3844.55, stdev=11594.85 00:14:48.193 lat (usec): min=187, max=59068, avg=3916.58, stdev=11747.53 00:14:48.193 clat percentiles (usec): 00:14:48.193 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:14:48.193 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 231], 60.00th=[ 253], 00:14:48.193 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 461], 95.00th=[41157], 00:14:48.193 | 99.00th=[41157], 99.50th=[42206], 99.90th=[58459], 99.95th=[58459], 00:14:48.193 | 99.99th=[58459] 00:14:48.193 bw ( KiB/s): min= 96, max= 3404, per=3.81%, avg=573.14, stdev=1248.30, samples=7 00:14:48.193 iops : min= 24, max= 851, avg=143.29, stdev=312.08, samples=7 00:14:48.194 lat (usec) : 250=58.47%, 500=31.67%, 750=0.42%, 1000=0.42% 00:14:48.194 lat (msec) : 20=0.11%, 50=8.69%, 100=0.11% 00:14:48.194 cpu : usr=0.13%, sys=0.30%, ctx=948, majf=0, minf=1 00:14:48.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.194 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.194 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.194 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3012741: Mon Jul 15 11:40:56 2024 00:14:48.194 read: IOPS=1227, BW=4907KiB/s (5025kB/s)(15.3MiB/3188msec) 00:14:48.194 slat (nsec): min=5738, max=53147, avg=9536.08, stdev=4705.14 00:14:48.194 clat (usec): min=180, max=41056, avg=797.24, stdev=4658.88 00:14:48.194 lat (usec): min=187, max=41089, avg=806.77, stdev=4660.24 00:14:48.194 clat percentiles (usec): 00:14:48.194 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 217], 00:14:48.194 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:14:48.194 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 375], 00:14:48.194 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:48.194 
| 99.99th=[41157] 00:14:48.194 bw ( KiB/s): min= 96, max=11304, per=28.08%, avg=4225.33, stdev=3924.85, samples=6 00:14:48.194 iops : min= 24, max= 2826, avg=1056.33, stdev=981.21, samples=6 00:14:48.194 lat (usec) : 250=54.91%, 500=43.35%, 750=0.38% 00:14:48.194 lat (msec) : 50=1.33% 00:14:48.194 cpu : usr=0.75%, sys=1.76%, ctx=3913, majf=0, minf=1 00:14:48.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.194 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.194 issued rwts: total=3912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.194 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3012748: Mon Jul 15 11:40:56 2024 00:14:48.194 read: IOPS=2892, BW=11.3MiB/s (11.8MB/s)(32.9MiB/2911msec) 00:14:48.194 slat (nsec): min=5206, max=69911, avg=14917.60, stdev=9959.87 00:14:48.194 clat (usec): min=190, max=42406, avg=324.58, stdev=1201.43 00:14:48.194 lat (usec): min=197, max=42422, avg=339.50, stdev=1201.90 00:14:48.194 clat percentiles (usec): 00:14:48.194 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:14:48.194 | 30.00th=[ 233], 40.00th=[ 245], 50.00th=[ 273], 60.00th=[ 289], 00:14:48.194 | 70.00th=[ 322], 80.00th=[ 355], 90.00th=[ 400], 95.00th=[ 420], 00:14:48.194 | 99.00th=[ 515], 99.50th=[ 553], 99.90th=[ 766], 99.95th=[41157], 00:14:48.194 | 99.99th=[42206] 00:14:48.194 bw ( KiB/s): min= 6288, max=15672, per=74.00%, avg=11134.40, stdev=3521.50, samples=5 00:14:48.194 iops : min= 1572, max= 3918, avg=2783.60, stdev=880.37, samples=5 00:14:48.194 lat (usec) : 250=42.17%, 500=56.62%, 750=1.07%, 1000=0.04% 00:14:48.194 lat (msec) : 20=0.01%, 50=0.08% 00:14:48.194 cpu : usr=1.89%, sys=5.67%, ctx=8422, majf=0, minf=1 00:14:48.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.194 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.194 issued rwts: total=8420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.194 00:14:48.194 Run status group 0 (all jobs): 00:14:48.194 READ: bw=14.7MiB/s (15.4MB/s), 821KiB/s-11.3MiB/s (841kB/s-11.8MB/s), io=54.6MiB (57.3MB), run=2911-3718msec 00:14:48.194 00:14:48.194 Disk stats (read/write): 00:14:48.194 nvme0n1: ios=709/0, merge=0/0, ticks=3267/0, in_queue=3267, util=94.88% 00:14:48.194 nvme0n2: ios=585/0, merge=0/0, ticks=3532/0, in_queue=3532, util=95.02% 00:14:48.194 nvme0n3: ios=3672/0, merge=0/0, ticks=4127/0, in_queue=4127, util=99.22% 00:14:48.194 nvme0n4: ios=8391/0, merge=0/0, ticks=3706/0, in_queue=3706, util=99.12% 00:14:48.452 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.452 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:48.709 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.709 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:48.966 11:40:56 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.966 11:40:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:49.224 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:49.224 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:49.480 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:49.480 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3012650 00:14:49.480 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:49.480 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:49.738 nvmf hotplug test: fio failed as expected 00:14:49.738 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.023 rmmod nvme_tcp 00:14:50.023 rmmod nvme_fabrics 00:14:50.023 rmmod nvme_keyring 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # 
set -e 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3010658 ']' 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3010658 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3010658 ']' 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3010658 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3010658 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3010658' 00:14:50.023 killing process with pid 3010658 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3010658 00:14:50.023 11:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3010658 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.281 11:40:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.811 11:41:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.811 00:14:52.811 real 0m23.492s 00:14:52.811 user 1m22.236s 00:14:52.811 sys 0m6.683s 00:14:52.811 11:41:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.811 11:41:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.811 ************************************ 00:14:52.811 END TEST nvmf_fio_target 00:14:52.811 ************************************ 00:14:52.811 11:41:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:52.811 11:41:00 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:52.811 11:41:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:52.811 11:41:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.811 11:41:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.811 ************************************ 00:14:52.811 START TEST nvmf_bdevio 00:14:52.811 ************************************ 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:52.811 * Looking 
for test storage... 00:14:52.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.811 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.812 11:41:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:54.714 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:54.714 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:54.714 Found net devices under 0000:84:00.0: cvl_0_0 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:54.714 
Found net devices under 0000:84:00.1: cvl_0_1 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.714 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:14:54.715 00:14:54.715 --- 10.0.0.2 ping statistics --- 00:14:54.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.715 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:14:54.715 00:14:54.715 --- 10.0.0.1 ping statistics --- 00:14:54.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.715 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3015387 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3015387 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3015387 ']' 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.715 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:54.715 [2024-07-15 11:41:02.622643] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:54.715 [2024-07-15 11:41:02.622734] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.715 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.715 [2024-07-15 11:41:02.690808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.973 [2024-07-15 11:41:02.794374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.973 [2024-07-15 11:41:02.794430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
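
The nvmf_tcp_init sequence traced above boils down to a handful of iproute2 commands; a minimal sketch using the interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addressing from this run, without the error handling and cleanup that nvmf/common.sh adds:

  # move the first e810 port into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace
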
00:14:54.973 [2024-07-15 11:41:02.794454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.973 [2024-07-15 11:41:02.794465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.973 [2024-07-15 11:41:02.794475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.973 [2024-07-15 11:41:02.794560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.973 [2024-07-15 11:41:02.794650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:54.973 [2024-07-15 11:41:02.794710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.973 [2024-07-15 11:41:02.794708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.973 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:55.231 [2024-07-15 11:41:02.960463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:55.232 Malloc0 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.232 11:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
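
Each rpc_cmd above is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt just started; preparing the same bdevio target by hand would look roughly like the following (assuming the default /var/tmp/spdk.sock RPC socket):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                          # same transport options as above
  $RPC bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
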
00:14:55.232 [2024-07-15 11:41:03.011606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.232 { 00:14:55.232 "params": { 00:14:55.232 "name": "Nvme$subsystem", 00:14:55.232 "trtype": "$TEST_TRANSPORT", 00:14:55.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.232 "adrfam": "ipv4", 00:14:55.232 "trsvcid": "$NVMF_PORT", 00:14:55.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.232 "hdgst": ${hdgst:-false}, 00:14:55.232 "ddgst": ${ddgst:-false} 00:14:55.232 }, 00:14:55.232 "method": "bdev_nvme_attach_controller" 00:14:55.232 } 00:14:55.232 EOF 00:14:55.232 )") 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:55.232 11:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.232 "params": { 00:14:55.232 "name": "Nvme1", 00:14:55.232 "trtype": "tcp", 00:14:55.232 "traddr": "10.0.0.2", 00:14:55.232 "adrfam": "ipv4", 00:14:55.232 "trsvcid": "4420", 00:14:55.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.232 "hdgst": false, 00:14:55.232 "ddgst": false 00:14:55.232 }, 00:14:55.232 "method": "bdev_nvme_attach_controller" 00:14:55.232 }' 00:14:55.232 [2024-07-15 11:41:03.058860] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:14:55.232 [2024-07-15 11:41:03.058944] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015528 ] 00:14:55.232 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.232 [2024-07-15 11:41:03.119976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:55.488 [2024-07-15 11:41:03.238150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.488 [2024-07-15 11:41:03.238201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.489 [2024-07-15 11:41:03.238205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.745 I/O targets: 00:14:55.745 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:55.745 00:14:55.745 00:14:55.745 CUnit - A unit testing framework for C - Version 2.1-3 00:14:55.745 http://cunit.sourceforge.net/ 00:14:55.745 00:14:55.745 00:14:55.745 Suite: bdevio tests on: Nvme1n1 00:14:55.745 Test: blockdev write read block ...passed 00:14:55.746 Test: blockdev write zeroes read block ...passed 00:14:55.746 Test: blockdev write zeroes read no split ...passed 00:14:55.746 Test: blockdev write zeroes read split ...passed 00:14:56.003 Test: blockdev write zeroes read split partial ...passed 00:14:56.003 Test: blockdev reset ...[2024-07-15 11:41:03.740620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:56.003 [2024-07-15 11:41:03.740730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17abbd0 (9): Bad file descriptor 00:14:56.003 [2024-07-15 11:41:03.753843] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:56.003 passed 00:14:56.003 Test: blockdev write read 8 blocks ...passed 00:14:56.003 Test: blockdev write read size > 128k ...passed 00:14:56.003 Test: blockdev write read invalid size ...passed 00:14:56.003 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:56.003 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:56.003 Test: blockdev write read max offset ...passed 00:14:56.003 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:56.003 Test: blockdev writev readv 8 blocks ...passed 00:14:56.003 Test: blockdev writev readv 30 x 1block ...passed 00:14:56.261 Test: blockdev writev readv block ...passed 00:14:56.261 Test: blockdev writev readv size > 128k ...passed 00:14:56.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:56.261 Test: blockdev comparev and writev ...[2024-07-15 11:41:04.005299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.005358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.005375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.005725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.005757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.005781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.005797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.006150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.006174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.006196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.006212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.006561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.006585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.006607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:56.261 [2024-07-15 11:41:04.006622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:56.261 passed 00:14:56.261 Test: blockdev nvme passthru rw ...passed 00:14:56.261 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:41:04.089043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.261 [2024-07-15 11:41:04.089070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:56.261 [2024-07-15 11:41:04.089238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.262 [2024-07-15 11:41:04.089260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:56.262 [2024-07-15 11:41:04.089413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.262 [2024-07-15 11:41:04.089434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:56.262 [2024-07-15 11:41:04.089586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:56.262 [2024-07-15 11:41:04.089609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:56.262 passed 00:14:56.262 Test: blockdev nvme admin passthru ...passed 00:14:56.262 Test: blockdev copy ...passed 00:14:56.262 00:14:56.262 Run Summary: Type Total Ran Passed Failed Inactive 00:14:56.262 suites 1 1 n/a 0 0 00:14:56.262 tests 23 23 23 0 0 00:14:56.262 asserts 152 152 152 0 n/a 00:14:56.262 00:14:56.262 Elapsed time = 1.112 seconds 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.520 rmmod nvme_tcp 00:14:56.520 rmmod nvme_fabrics 00:14:56.520 rmmod nvme_keyring 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3015387 ']' 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3015387 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3015387 ']' 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3015387 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3015387 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3015387' 00:14:56.520 killing process with pid 3015387 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3015387 00:14:56.520 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3015387 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.088 11:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.990 11:41:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.990 00:14:58.990 real 0m6.583s 00:14:58.990 user 0m11.063s 00:14:58.990 sys 0m2.152s 00:14:58.990 11:41:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.990 11:41:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.990 ************************************ 00:14:58.990 END TEST nvmf_bdevio 00:14:58.990 ************************************ 00:14:58.990 11:41:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.990 11:41:06 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:58.990 11:41:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.990 11:41:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.990 11:41:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.990 ************************************ 00:14:58.990 START TEST nvmf_auth_target 00:14:58.991 ************************************ 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:58.991 * Looking for test storage... 
00:14:58.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.991 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.521 11:41:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:01.521 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:01.521 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.521 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:15:01.521 Found net devices under 0000:84:00.0: cvl_0_0 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:01.522 Found net devices under 0000:84:00.1: cvl_0_1 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.522 11:41:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:15:01.522 00:15:01.522 --- 10.0.0.2 ping statistics --- 00:15:01.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.522 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:15:01.522 00:15:01.522 --- 10.0.0.1 ping statistics --- 00:15:01.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.522 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3017615 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3017615 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3017615 ']' 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
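
nvmfappstart/waitforlisten, seen here and earlier for the bdevio run, reduce to launching the target inside the test namespace and then polling its RPC socket; a rough equivalent (the polling loop below is illustrative and stands in for waitforlisten's actual checks):

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!                                                            # 3017615 in this run
  # block until the app answers on /var/tmp/spdk.sock before any rpc_cmd is issued
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
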
00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.522 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3017756 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c8282dea465a6e5442fa82996ce511dea17c387fbb7f7ed5 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AiE 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c8282dea465a6e5442fa82996ce511dea17c387fbb7f7ed5 0 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c8282dea465a6e5442fa82996ce511dea17c387fbb7f7ed5 0 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c8282dea465a6e5442fa82996ce511dea17c387fbb7f7ed5 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AiE 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AiE 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.AiE 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b76423b529a7c927ee5de223bba920ec88f96a6a0e130ef9a56052216705b8b1 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2sI 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b76423b529a7c927ee5de223bba920ec88f96a6a0e130ef9a56052216705b8b1 3 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b76423b529a7c927ee5de223bba920ec88f96a6a0e130ef9a56052216705b8b1 3 00:15:01.781 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b76423b529a7c927ee5de223bba920ec88f96a6a0e130ef9a56052216705b8b1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2sI 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2sI 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.2sI 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f87b3f315b64ee0278530ee7a7b8d34e 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YME 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f87b3f315b64ee0278530ee7a7b8d34e 1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f87b3f315b64ee0278530ee7a7b8d34e 1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=f87b3f315b64ee0278530ee7a7b8d34e 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YME 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YME 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.YME 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bbe1ad48b5b0f9d5b59069201ae857df4b8c30d1eb94c803 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Eu 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bbe1ad48b5b0f9d5b59069201ae857df4b8c30d1eb94c803 2 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bbe1ad48b5b0f9d5b59069201ae857df4b8c30d1eb94c803 2 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bbe1ad48b5b0f9d5b59069201ae857df4b8c30d1eb94c803 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Eu 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Eu 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.7Eu 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3764dcb0ed92112b4483e2d0ef4730b680c88175800d6faa 00:15:01.782 
11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KuH 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3764dcb0ed92112b4483e2d0ef4730b680c88175800d6faa 2 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3764dcb0ed92112b4483e2d0ef4730b680c88175800d6faa 2 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3764dcb0ed92112b4483e2d0ef4730b680c88175800d6faa 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:01.782 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KuH 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KuH 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.KuH 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6544c3170ce584cdee2a7dd8d5a2e9f5 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.K1M 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6544c3170ce584cdee2a7dd8d5a2e9f5 1 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6544c3170ce584cdee2a7dd8d5a2e9f5 1 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6544c3170ce584cdee2a7dd8d5a2e9f5 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.K1M 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.K1M 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.K1M 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=187d22b785ea857db7012a7e8a0816a4e5c5f4fbc12ae98f71cd8c4fa7324670 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:02.040 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OT6 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 187d22b785ea857db7012a7e8a0816a4e5c5f4fbc12ae98f71cd8c4fa7324670 3 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 187d22b785ea857db7012a7e8a0816a4e5c5f4fbc12ae98f71cd8c4fa7324670 3 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=187d22b785ea857db7012a7e8a0816a4e5c5f4fbc12ae98f71cd8c4fa7324670 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OT6 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OT6 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.OT6 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3017615 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3017615 ']' 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
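Annotation: the gen_dhchap_key / format_dhchap_key calls traced above draw len/2 random bytes from /dev/urandom with xxd and wrap them in the DHHC-1 secret notation that the later nvme connect commands consume (DHHC-1:<id>:<base64 payload>:, where id 00/01/02/03 selects no hash / SHA-256 / SHA-384 / SHA-512). The sketch below is a stand-alone approximation of that helper, not the test's own code; in particular, appending a little-endian CRC-32 of the key bytes before base64-encoding is an assumption about what the inline "python -" step in the trace does.

  # Minimal sketch (assumptions noted above) of producing a DH-HMAC-CHAP secret
  # like /tmp/spdk.key-sha512.2sI in this log.
  gen_key() {
      local hmac_id=$1 bytes=$2        # hmac_id: 0=none 1=sha256 2=sha384 3=sha512
      local hex
      hex=$(xxd -p -c0 -l "$bytes" /dev/urandom)   # same random source as the trace
      python3 - "$hex" "$hmac_id" <<'EOF'
  import sys, base64, zlib
  key = bytes.fromhex(sys.argv[1])
  # Assumption: a little-endian CRC-32 of the key is appended before encoding.
  crc = zlib.crc32(key).to_bytes(4, "little")
  print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
  EOF
  }
  gen_key 3 32   # e.g. a 32-byte secret with SHA-512 transform, as for keys[0]/ckeys[0]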
00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.041 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.300 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.300 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3017756 /var/tmp/host.sock 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3017756 ']' 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:02.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.301 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AiE 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.AiE 00:15:02.558 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.AiE 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.2sI ]] 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2sI 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2sI 00:15:02.815 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2sI 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YME 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YME 00:15:03.072 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YME 00:15:03.331 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.7Eu ]] 00:15:03.331 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Eu 00:15:03.331 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.331 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.589 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.589 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Eu 00:15:03.589 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Eu 00:15:03.589 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:03.589 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KuH 00:15:03.847 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.847 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.847 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.847 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KuH 00:15:03.847 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KuH 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.K1M ]] 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K1M 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K1M 00:15:04.106 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.K1M 00:15:04.106 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:04.106 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OT6 00:15:04.106 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.106 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.OT6 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.OT6 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.365 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.623 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.194 00:15:05.194 11:41:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.194 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.194 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.194 { 00:15:05.194 "cntlid": 1, 00:15:05.194 "qid": 0, 00:15:05.194 "state": "enabled", 00:15:05.194 "thread": "nvmf_tgt_poll_group_000", 00:15:05.194 "listen_address": { 00:15:05.194 "trtype": "TCP", 00:15:05.194 "adrfam": "IPv4", 00:15:05.194 "traddr": "10.0.0.2", 00:15:05.194 "trsvcid": "4420" 00:15:05.194 }, 00:15:05.194 "peer_address": { 00:15:05.194 "trtype": "TCP", 00:15:05.194 "adrfam": "IPv4", 00:15:05.194 "traddr": "10.0.0.1", 00:15:05.194 "trsvcid": "49940" 00:15:05.194 }, 00:15:05.194 "auth": { 00:15:05.194 "state": "completed", 00:15:05.194 "digest": "sha256", 00:15:05.194 "dhgroup": "null" 00:15:05.194 } 00:15:05.194 } 00:15:05.194 ]' 00:15:05.194 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.454 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.711 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.642 11:41:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:06.642 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.900 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.158 00:15:07.158 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.158 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.158 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.437 { 00:15:07.437 "cntlid": 3, 00:15:07.437 "qid": 0, 00:15:07.437 
"state": "enabled", 00:15:07.437 "thread": "nvmf_tgt_poll_group_000", 00:15:07.437 "listen_address": { 00:15:07.437 "trtype": "TCP", 00:15:07.437 "adrfam": "IPv4", 00:15:07.437 "traddr": "10.0.0.2", 00:15:07.437 "trsvcid": "4420" 00:15:07.437 }, 00:15:07.437 "peer_address": { 00:15:07.437 "trtype": "TCP", 00:15:07.437 "adrfam": "IPv4", 00:15:07.437 "traddr": "10.0.0.1", 00:15:07.437 "trsvcid": "49960" 00:15:07.437 }, 00:15:07.437 "auth": { 00:15:07.437 "state": "completed", 00:15:07.437 "digest": "sha256", 00:15:07.437 "dhgroup": "null" 00:15:07.437 } 00:15:07.437 } 00:15:07.437 ]' 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.437 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.702 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.635 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:08.892 11:41:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.892 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.155 00:15:09.155 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.155 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.155 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.449 { 00:15:09.449 "cntlid": 5, 00:15:09.449 "qid": 0, 00:15:09.449 "state": "enabled", 00:15:09.449 "thread": "nvmf_tgt_poll_group_000", 00:15:09.449 "listen_address": { 00:15:09.449 "trtype": "TCP", 00:15:09.449 "adrfam": "IPv4", 00:15:09.449 "traddr": "10.0.0.2", 00:15:09.449 "trsvcid": "4420" 00:15:09.449 }, 00:15:09.449 "peer_address": { 00:15:09.449 "trtype": "TCP", 00:15:09.449 "adrfam": "IPv4", 00:15:09.449 "traddr": "10.0.0.1", 00:15:09.449 "trsvcid": "49992" 00:15:09.449 }, 00:15:09.449 "auth": { 00:15:09.449 "state": "completed", 00:15:09.449 "digest": "sha256", 00:15:09.449 "dhgroup": "null" 00:15:09.449 } 00:15:09.449 } 00:15:09.449 ]' 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.449 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.706 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:10.643 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:10.920 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.177 00:15:11.177 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.177 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.177 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.435 { 00:15:11.435 "cntlid": 7, 00:15:11.435 "qid": 0, 00:15:11.435 "state": "enabled", 00:15:11.435 "thread": "nvmf_tgt_poll_group_000", 00:15:11.435 "listen_address": { 00:15:11.435 "trtype": "TCP", 00:15:11.435 "adrfam": "IPv4", 00:15:11.435 "traddr": "10.0.0.2", 00:15:11.435 "trsvcid": "4420" 00:15:11.435 }, 00:15:11.435 "peer_address": { 00:15:11.435 "trtype": "TCP", 00:15:11.435 "adrfam": "IPv4", 00:15:11.435 "traddr": "10.0.0.1", 00:15:11.435 "trsvcid": "50016" 00:15:11.435 }, 00:15:11.435 "auth": { 00:15:11.435 "state": "completed", 00:15:11.435 "digest": "sha256", 00:15:11.435 "dhgroup": "null" 00:15:11.435 } 00:15:11.435 } 00:15:11.435 ]' 00:15:11.435 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.692 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.949 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.881 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.139 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.396 00:15:13.396 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.396 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.396 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.653 { 00:15:13.653 "cntlid": 9, 00:15:13.653 "qid": 0, 00:15:13.653 "state": "enabled", 00:15:13.653 "thread": "nvmf_tgt_poll_group_000", 00:15:13.653 "listen_address": { 00:15:13.653 "trtype": "TCP", 00:15:13.653 "adrfam": "IPv4", 00:15:13.653 "traddr": "10.0.0.2", 00:15:13.653 "trsvcid": "4420" 00:15:13.653 }, 00:15:13.653 "peer_address": { 00:15:13.653 "trtype": "TCP", 00:15:13.653 "adrfam": "IPv4", 00:15:13.653 "traddr": "10.0.0.1", 00:15:13.653 "trsvcid": "40700" 00:15:13.653 }, 00:15:13.653 "auth": { 00:15:13.653 "state": "completed", 00:15:13.653 "digest": "sha256", 00:15:13.653 "dhgroup": "ffdhe2048" 00:15:13.653 } 00:15:13.653 } 00:15:13.653 ]' 00:15:13.653 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.911 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.169 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.106 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.364 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.622 00:15:15.622 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.622 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.622 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.879 { 00:15:15.879 "cntlid": 11, 00:15:15.879 "qid": 0, 00:15:15.879 "state": "enabled", 00:15:15.879 "thread": "nvmf_tgt_poll_group_000", 00:15:15.879 "listen_address": { 00:15:15.879 "trtype": "TCP", 00:15:15.879 "adrfam": "IPv4", 00:15:15.879 "traddr": "10.0.0.2", 00:15:15.879 "trsvcid": "4420" 00:15:15.879 }, 00:15:15.879 "peer_address": { 00:15:15.879 "trtype": "TCP", 00:15:15.879 "adrfam": "IPv4", 00:15:15.879 "traddr": "10.0.0.1", 00:15:15.879 "trsvcid": "40738" 00:15:15.879 }, 00:15:15.879 "auth": { 00:15:15.879 "state": "completed", 00:15:15.879 "digest": "sha256", 00:15:15.879 "dhgroup": "ffdhe2048" 00:15:15.879 } 00:15:15.879 } 00:15:15.879 ]' 00:15:15.879 
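Annotation: the qpairs output above (cntlid 13, digest sha256, dhgroup ffdhe2048, auth.state "completed") is the check each loop iteration ends with. For readability, here is the sequence the trace keeps repeating, condensed into one block; every command is copied from this log, only the ordering comments are added, and listener/subsystem setup done earlier in the test is omitted.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: pin one digest/dhgroup pair for this iteration.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Target side (default /var/tmp/spdk.sock): allow the host NQN and bind its keys.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller; the test then verifies bdev_nvme_get_controllers
  # reports nvme0 and nvmf_subsystem_get_qpairs shows auth.state == "completed".
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1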
11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.879 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.137 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.137 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.137 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.137 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:15:17.073 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.073 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:17.073 11:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.073 11:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.073 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.073 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.073 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.073 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.331 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.589 00:15:17.589 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.589 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.589 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.847 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.847 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.847 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.847 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.847 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.847 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.847 { 00:15:17.847 "cntlid": 13, 00:15:17.847 "qid": 0, 00:15:17.847 "state": "enabled", 00:15:17.847 "thread": "nvmf_tgt_poll_group_000", 00:15:17.847 "listen_address": { 00:15:17.847 "trtype": "TCP", 00:15:17.847 "adrfam": "IPv4", 00:15:17.847 "traddr": "10.0.0.2", 00:15:17.847 "trsvcid": "4420" 00:15:17.847 }, 00:15:17.847 "peer_address": { 00:15:17.847 "trtype": "TCP", 00:15:17.847 "adrfam": "IPv4", 00:15:17.847 "traddr": "10.0.0.1", 00:15:17.847 "trsvcid": "40758" 00:15:17.847 }, 00:15:17.848 "auth": { 00:15:17.848 "state": "completed", 00:15:17.848 "digest": "sha256", 00:15:17.848 "dhgroup": "ffdhe2048" 00:15:17.848 } 00:15:17.848 } 00:15:17.848 ]' 00:15:17.848 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.106 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.363 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.299 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:19.557 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:19.557 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.557 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:19.558 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:19.815 00:15:19.815 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.815 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.815 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.071 { 00:15:20.071 "cntlid": 15, 00:15:20.071 "qid": 0, 00:15:20.071 "state": "enabled", 00:15:20.071 "thread": "nvmf_tgt_poll_group_000", 00:15:20.071 "listen_address": { 00:15:20.071 "trtype": "TCP", 00:15:20.071 "adrfam": "IPv4", 00:15:20.071 "traddr": "10.0.0.2", 00:15:20.071 "trsvcid": "4420" 00:15:20.071 }, 00:15:20.071 "peer_address": { 00:15:20.071 "trtype": "TCP", 00:15:20.071 "adrfam": "IPv4", 00:15:20.071 "traddr": "10.0.0.1", 00:15:20.071 "trsvcid": "40802" 00:15:20.071 }, 00:15:20.071 "auth": { 00:15:20.071 "state": "completed", 00:15:20.071 "digest": "sha256", 00:15:20.071 "dhgroup": "ffdhe2048" 00:15:20.071 } 00:15:20.071 } 00:15:20.071 ]' 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.071 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.071 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.071 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.071 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.330 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.266 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.524 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.782 00:15:21.782 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.782 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.782 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.040 { 00:15:22.040 "cntlid": 17, 00:15:22.040 "qid": 0, 00:15:22.040 "state": "enabled", 00:15:22.040 "thread": "nvmf_tgt_poll_group_000", 00:15:22.040 "listen_address": { 00:15:22.040 "trtype": "TCP", 00:15:22.040 "adrfam": "IPv4", 00:15:22.040 "traddr": 
"10.0.0.2", 00:15:22.040 "trsvcid": "4420" 00:15:22.040 }, 00:15:22.040 "peer_address": { 00:15:22.040 "trtype": "TCP", 00:15:22.040 "adrfam": "IPv4", 00:15:22.040 "traddr": "10.0.0.1", 00:15:22.040 "trsvcid": "40820" 00:15:22.040 }, 00:15:22.040 "auth": { 00:15:22.040 "state": "completed", 00:15:22.040 "digest": "sha256", 00:15:22.040 "dhgroup": "ffdhe3072" 00:15:22.040 } 00:15:22.040 } 00:15:22.040 ]' 00:15:22.040 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.298 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.555 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:15:23.491 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.491 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:23.491 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.491 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.491 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.491 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.492 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.492 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.749 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.057 00:15:24.057 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.057 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.057 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.315 { 00:15:24.315 "cntlid": 19, 00:15:24.315 "qid": 0, 00:15:24.315 "state": "enabled", 00:15:24.315 "thread": "nvmf_tgt_poll_group_000", 00:15:24.315 "listen_address": { 00:15:24.315 "trtype": "TCP", 00:15:24.315 "adrfam": "IPv4", 00:15:24.315 "traddr": "10.0.0.2", 00:15:24.315 "trsvcid": "4420" 00:15:24.315 }, 00:15:24.315 "peer_address": { 00:15:24.315 "trtype": "TCP", 00:15:24.315 "adrfam": "IPv4", 00:15:24.315 "traddr": "10.0.0.1", 00:15:24.315 "trsvcid": "39110" 00:15:24.315 }, 00:15:24.315 "auth": { 00:15:24.315 "state": "completed", 00:15:24.315 "digest": "sha256", 00:15:24.315 "dhgroup": "ffdhe3072" 00:15:24.315 } 00:15:24.315 } 00:15:24.315 ]' 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.315 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.572 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.505 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.762 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.328 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.328 { 00:15:26.328 "cntlid": 21, 00:15:26.328 "qid": 0, 00:15:26.328 "state": "enabled", 00:15:26.328 "thread": "nvmf_tgt_poll_group_000", 00:15:26.328 "listen_address": { 00:15:26.328 "trtype": "TCP", 00:15:26.328 "adrfam": "IPv4", 00:15:26.328 "traddr": "10.0.0.2", 00:15:26.328 "trsvcid": "4420" 00:15:26.328 }, 00:15:26.328 "peer_address": { 00:15:26.328 "trtype": "TCP", 00:15:26.328 "adrfam": "IPv4", 00:15:26.328 "traddr": "10.0.0.1", 00:15:26.328 "trsvcid": "39146" 00:15:26.328 }, 00:15:26.328 "auth": { 00:15:26.328 "state": "completed", 00:15:26.328 "digest": "sha256", 00:15:26.328 "dhgroup": "ffdhe3072" 00:15:26.328 } 00:15:26.328 } 00:15:26.328 ]' 00:15:26.328 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.586 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.843 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:15:27.779 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
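[Editor's note] The trace above repeats one DH-HMAC-CHAP cycle per digest/dhgroup/key combination. A condensed, illustrative sketch of a single pass is given here; it only restates commands already visible in the trace. It assumes the same host NQN/UUID seen in the log, that key0/ckey0 are key names registered earlier in the script (not shown in this excerpt), and that the target-side rpc_cmd calls use SPDK's default RPC socket (only the host-side socket /var/tmp/host.sock is explicit in the trace).

  # Illustrative sketch only -- one connect_authenticate pass (sha256 / ffdhe3072 / key index 0).
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02   # host NQN from the trace
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: restrict the initiator to the digest/dhgroup under test.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side (default RPC socket assumed): allow the host with the keys under test.
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller, which triggers DH-HMAC-CHAP authentication against the target.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: confirm the qpair completed authentication with the expected digest and dhgroup.
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .state, .digest, .dhgroup'

  # Tear down before the next digest/dhgroup/key combination.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The subsequent nvme connect / nvme disconnect steps in the trace repeat the same check from the kernel initiator using the DHHC-1 secrets, then remove the host from the subsystem before the loop advances.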
00:15:27.779 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:27.779 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.779 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.779 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.780 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.780 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.780 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.038 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.296 00:15:28.296 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.296 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.296 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.553 { 00:15:28.553 "cntlid": 23, 00:15:28.553 "qid": 0, 00:15:28.553 "state": "enabled", 00:15:28.553 "thread": "nvmf_tgt_poll_group_000", 00:15:28.553 "listen_address": { 00:15:28.553 "trtype": "TCP", 00:15:28.553 "adrfam": "IPv4", 00:15:28.553 "traddr": "10.0.0.2", 00:15:28.553 "trsvcid": "4420" 00:15:28.553 }, 00:15:28.553 "peer_address": { 00:15:28.553 "trtype": "TCP", 00:15:28.553 "adrfam": "IPv4", 00:15:28.553 "traddr": "10.0.0.1", 00:15:28.553 "trsvcid": "39184" 00:15:28.553 }, 00:15:28.553 "auth": { 00:15:28.553 "state": "completed", 00:15:28.553 "digest": "sha256", 00:15:28.553 "dhgroup": "ffdhe3072" 00:15:28.553 } 00:15:28.553 } 00:15:28.553 ]' 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.553 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.811 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.811 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.811 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.070 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.008 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.267 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.267 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.267 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.524 00:15:30.524 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.524 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.524 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.781 { 00:15:30.781 "cntlid": 25, 00:15:30.781 "qid": 0, 00:15:30.781 "state": "enabled", 00:15:30.781 "thread": "nvmf_tgt_poll_group_000", 00:15:30.781 "listen_address": { 00:15:30.781 "trtype": "TCP", 00:15:30.781 "adrfam": "IPv4", 00:15:30.781 "traddr": "10.0.0.2", 00:15:30.781 "trsvcid": "4420" 00:15:30.781 }, 00:15:30.781 "peer_address": { 00:15:30.781 "trtype": "TCP", 00:15:30.781 "adrfam": "IPv4", 00:15:30.781 "traddr": "10.0.0.1", 00:15:30.781 "trsvcid": "39208" 00:15:30.781 }, 00:15:30.781 "auth": { 00:15:30.781 "state": "completed", 00:15:30.781 "digest": "sha256", 00:15:30.781 "dhgroup": "ffdhe4096" 00:15:30.781 } 00:15:30.781 } 00:15:30.781 ]' 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.781 11:41:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.781 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.038 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.038 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.038 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.305 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:15:32.273 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.273 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:32.273 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.273 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.273 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.531 11:41:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.531 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.531 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.789 00:15:32.789 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.789 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.789 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.047 { 00:15:33.047 "cntlid": 27, 00:15:33.047 "qid": 0, 00:15:33.047 "state": "enabled", 00:15:33.047 "thread": "nvmf_tgt_poll_group_000", 00:15:33.047 "listen_address": { 00:15:33.047 "trtype": "TCP", 00:15:33.047 "adrfam": "IPv4", 00:15:33.047 "traddr": "10.0.0.2", 00:15:33.047 "trsvcid": "4420" 00:15:33.047 }, 00:15:33.047 "peer_address": { 00:15:33.047 "trtype": "TCP", 00:15:33.047 "adrfam": "IPv4", 00:15:33.047 "traddr": "10.0.0.1", 00:15:33.047 "trsvcid": "39946" 00:15:33.047 }, 00:15:33.047 "auth": { 00:15:33.047 "state": "completed", 00:15:33.047 "digest": "sha256", 00:15:33.047 "dhgroup": "ffdhe4096" 00:15:33.047 } 00:15:33.047 } 00:15:33.047 ]' 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.047 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.047 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.047 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.047 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.305 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.241 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.500 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.066 00:15:35.066 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.066 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.066 11:41:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.334 { 00:15:35.334 "cntlid": 29, 00:15:35.334 "qid": 0, 00:15:35.334 "state": "enabled", 00:15:35.334 "thread": "nvmf_tgt_poll_group_000", 00:15:35.334 "listen_address": { 00:15:35.334 "trtype": "TCP", 00:15:35.334 "adrfam": "IPv4", 00:15:35.334 "traddr": "10.0.0.2", 00:15:35.334 "trsvcid": "4420" 00:15:35.334 }, 00:15:35.334 "peer_address": { 00:15:35.334 "trtype": "TCP", 00:15:35.334 "adrfam": "IPv4", 00:15:35.334 "traddr": "10.0.0.1", 00:15:35.334 "trsvcid": "39964" 00:15:35.334 }, 00:15:35.334 "auth": { 00:15:35.334 "state": "completed", 00:15:35.334 "digest": "sha256", 00:15:35.334 "dhgroup": "ffdhe4096" 00:15:35.334 } 00:15:35.334 } 00:15:35.334 ]' 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.334 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.598 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.538 11:41:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.538 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.796 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.364 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.364 11:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.622 { 00:15:37.622 "cntlid": 31, 00:15:37.622 "qid": 0, 00:15:37.622 "state": "enabled", 00:15:37.622 "thread": "nvmf_tgt_poll_group_000", 00:15:37.622 "listen_address": { 00:15:37.622 "trtype": "TCP", 00:15:37.622 "adrfam": "IPv4", 00:15:37.622 "traddr": "10.0.0.2", 00:15:37.622 "trsvcid": "4420" 00:15:37.622 }, 
00:15:37.622 "peer_address": { 00:15:37.622 "trtype": "TCP", 00:15:37.622 "adrfam": "IPv4", 00:15:37.622 "traddr": "10.0.0.1", 00:15:37.622 "trsvcid": "39982" 00:15:37.622 }, 00:15:37.622 "auth": { 00:15:37.622 "state": "completed", 00:15:37.622 "digest": "sha256", 00:15:37.622 "dhgroup": "ffdhe4096" 00:15:37.622 } 00:15:37.622 } 00:15:37.622 ]' 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.622 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.879 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.816 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.074 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.643 00:15:39.643 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.643 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.643 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.901 { 00:15:39.901 "cntlid": 33, 00:15:39.901 "qid": 0, 00:15:39.901 "state": "enabled", 00:15:39.901 "thread": "nvmf_tgt_poll_group_000", 00:15:39.901 "listen_address": { 00:15:39.901 "trtype": "TCP", 00:15:39.901 "adrfam": "IPv4", 00:15:39.901 "traddr": "10.0.0.2", 00:15:39.901 "trsvcid": "4420" 00:15:39.901 }, 00:15:39.901 "peer_address": { 00:15:39.901 "trtype": "TCP", 00:15:39.901 "adrfam": "IPv4", 00:15:39.901 "traddr": "10.0.0.1", 00:15:39.901 "trsvcid": "40004" 00:15:39.901 }, 00:15:39.901 "auth": { 00:15:39.901 "state": "completed", 00:15:39.901 "digest": "sha256", 00:15:39.901 "dhgroup": "ffdhe6144" 00:15:39.901 } 00:15:39.901 } 00:15:39.901 ]' 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.901 11:41:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.901 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.180 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.116 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.373 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.940 00:15:41.940 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.940 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.940 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.198 { 00:15:42.198 "cntlid": 35, 00:15:42.198 "qid": 0, 00:15:42.198 "state": "enabled", 00:15:42.198 "thread": "nvmf_tgt_poll_group_000", 00:15:42.198 "listen_address": { 00:15:42.198 "trtype": "TCP", 00:15:42.198 "adrfam": "IPv4", 00:15:42.198 "traddr": "10.0.0.2", 00:15:42.198 "trsvcid": "4420" 00:15:42.198 }, 00:15:42.198 "peer_address": { 00:15:42.198 "trtype": "TCP", 00:15:42.198 "adrfam": "IPv4", 00:15:42.198 "traddr": "10.0.0.1", 00:15:42.198 "trsvcid": "40022" 00:15:42.198 }, 00:15:42.198 "auth": { 00:15:42.198 "state": "completed", 00:15:42.198 "digest": "sha256", 00:15:42.198 "dhgroup": "ffdhe6144" 00:15:42.198 } 00:15:42.198 } 00:15:42.198 ]' 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.198 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.455 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.455 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.455 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.455 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.455 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.713 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.649 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.907 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.474 00:15:44.474 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.475 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.475 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.475 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.475 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.733 { 00:15:44.733 "cntlid": 37, 00:15:44.733 "qid": 0, 00:15:44.733 "state": "enabled", 00:15:44.733 "thread": "nvmf_tgt_poll_group_000", 00:15:44.733 "listen_address": { 00:15:44.733 "trtype": "TCP", 00:15:44.733 "adrfam": "IPv4", 00:15:44.733 "traddr": "10.0.0.2", 00:15:44.733 "trsvcid": "4420" 00:15:44.733 }, 00:15:44.733 "peer_address": { 00:15:44.733 "trtype": "TCP", 00:15:44.733 "adrfam": "IPv4", 00:15:44.733 "traddr": "10.0.0.1", 00:15:44.733 "trsvcid": "40486" 00:15:44.733 }, 00:15:44.733 "auth": { 00:15:44.733 "state": "completed", 00:15:44.733 "digest": "sha256", 00:15:44.733 "dhgroup": "ffdhe6144" 00:15:44.733 } 00:15:44.733 } 00:15:44.733 ]' 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.733 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.991 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:15:45.926 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.927 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.185 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.185 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.185 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.185 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.753 00:15:46.753 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.753 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.753 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.012 { 00:15:47.012 "cntlid": 39, 00:15:47.012 "qid": 0, 00:15:47.012 "state": "enabled", 00:15:47.012 "thread": "nvmf_tgt_poll_group_000", 00:15:47.012 "listen_address": { 00:15:47.012 "trtype": "TCP", 00:15:47.012 "adrfam": "IPv4", 00:15:47.012 "traddr": "10.0.0.2", 00:15:47.012 "trsvcid": "4420" 00:15:47.012 }, 00:15:47.012 "peer_address": { 00:15:47.012 "trtype": "TCP", 00:15:47.012 "adrfam": "IPv4", 00:15:47.012 "traddr": "10.0.0.1", 00:15:47.012 "trsvcid": "40520" 00:15:47.012 }, 00:15:47.012 "auth": { 00:15:47.012 "state": "completed", 00:15:47.012 "digest": "sha256", 00:15:47.012 "dhgroup": "ffdhe6144" 00:15:47.012 } 00:15:47.012 } 00:15:47.012 ]' 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.012 11:41:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.012 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.271 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.208 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.466 11:41:56 
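The iterations above and below follow the nested loops that the xtrace exposes in target/auth.sh: an outer loop over digests, a middle loop over DH groups, and an inner loop over the configured key indexes, with the host-side bdev_nvme options reconfigured before every connect_authenticate call. Reconstructed roughly from the trace (the digests/dhgroups/keys arrays are populated earlier in the script and are not part of this excerpt; hostrpc expands to rpc.py -s /var/tmp/host.sock, as shown at target/auth.sh@31):

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # point the host-side initiator at the digest/DH-group combination under test
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # one full add_host / attach / verify / nvme-cli connect / cleanup cycle for this key
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done

This is why the log repeats the same command sequence with only the key index, the DH group (ffdhe6144, then ffdhe8192, ...) and the digest (sha256, then sha384, ...) changing between iterations.
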
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.466 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.401 00:15:49.401 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.401 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.401 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.659 { 00:15:49.659 "cntlid": 41, 00:15:49.659 "qid": 0, 00:15:49.659 "state": "enabled", 00:15:49.659 "thread": "nvmf_tgt_poll_group_000", 00:15:49.659 "listen_address": { 00:15:49.659 "trtype": "TCP", 00:15:49.659 "adrfam": "IPv4", 00:15:49.659 "traddr": "10.0.0.2", 00:15:49.659 "trsvcid": "4420" 00:15:49.659 }, 00:15:49.659 "peer_address": { 00:15:49.659 "trtype": "TCP", 00:15:49.659 "adrfam": "IPv4", 00:15:49.659 "traddr": "10.0.0.1", 00:15:49.659 "trsvcid": "40544" 00:15:49.659 }, 00:15:49.659 "auth": { 00:15:49.659 "state": "completed", 00:15:49.659 "digest": "sha256", 00:15:49.659 "dhgroup": "ffdhe8192" 00:15:49.659 } 00:15:49.659 } 00:15:49.659 ]' 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.659 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.917 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.852 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.109 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.110 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.110 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.042 00:15:52.042 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.042 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.042 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.300 { 00:15:52.300 "cntlid": 43, 00:15:52.300 "qid": 0, 00:15:52.300 "state": "enabled", 00:15:52.300 "thread": "nvmf_tgt_poll_group_000", 00:15:52.300 "listen_address": { 00:15:52.300 "trtype": "TCP", 00:15:52.300 "adrfam": "IPv4", 00:15:52.300 "traddr": "10.0.0.2", 00:15:52.300 "trsvcid": "4420" 00:15:52.300 }, 00:15:52.300 "peer_address": { 00:15:52.300 "trtype": "TCP", 00:15:52.300 "adrfam": "IPv4", 00:15:52.300 "traddr": "10.0.0.1", 00:15:52.300 "trsvcid": "40578" 00:15:52.300 }, 00:15:52.300 "auth": { 00:15:52.300 "state": "completed", 00:15:52.300 "digest": "sha256", 00:15:52.300 "dhgroup": "ffdhe8192" 00:15:52.300 } 00:15:52.300 } 00:15:52.300 ]' 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.300 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.557 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.492 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.750 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.720 00:15:54.720 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.720 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.720 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.001 { 00:15:55.001 "cntlid": 45, 00:15:55.001 "qid": 0, 00:15:55.001 "state": "enabled", 00:15:55.001 "thread": "nvmf_tgt_poll_group_000", 00:15:55.001 "listen_address": { 00:15:55.001 "trtype": "TCP", 00:15:55.001 "adrfam": "IPv4", 00:15:55.001 "traddr": "10.0.0.2", 00:15:55.001 "trsvcid": "4420" 
00:15:55.001 }, 00:15:55.001 "peer_address": { 00:15:55.001 "trtype": "TCP", 00:15:55.001 "adrfam": "IPv4", 00:15:55.001 "traddr": "10.0.0.1", 00:15:55.001 "trsvcid": "36040" 00:15:55.001 }, 00:15:55.001 "auth": { 00:15:55.001 "state": "completed", 00:15:55.001 "digest": "sha256", 00:15:55.001 "dhgroup": "ffdhe8192" 00:15:55.001 } 00:15:55.001 } 00:15:55.001 ]' 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.001 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.259 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:56.194 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.452 11:42:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.452 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.392 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.392 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.392 { 00:15:57.392 "cntlid": 47, 00:15:57.392 "qid": 0, 00:15:57.392 "state": "enabled", 00:15:57.392 "thread": "nvmf_tgt_poll_group_000", 00:15:57.392 "listen_address": { 00:15:57.392 "trtype": "TCP", 00:15:57.392 "adrfam": "IPv4", 00:15:57.392 "traddr": "10.0.0.2", 00:15:57.392 "trsvcid": "4420" 00:15:57.392 }, 00:15:57.392 "peer_address": { 00:15:57.392 "trtype": "TCP", 00:15:57.392 "adrfam": "IPv4", 00:15:57.392 "traddr": "10.0.0.1", 00:15:57.392 "trsvcid": "36060" 00:15:57.392 }, 00:15:57.392 "auth": { 00:15:57.392 "state": "completed", 00:15:57.392 "digest": "sha256", 00:15:57.392 "dhgroup": "ffdhe8192" 00:15:57.392 } 00:15:57.392 } 00:15:57.392 ]' 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.651 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.651 
11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.909 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.847 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.105 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.363 00:15:59.363 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.363 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.363 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.621 { 00:15:59.621 "cntlid": 49, 00:15:59.621 "qid": 0, 00:15:59.621 "state": "enabled", 00:15:59.621 "thread": "nvmf_tgt_poll_group_000", 00:15:59.621 "listen_address": { 00:15:59.621 "trtype": "TCP", 00:15:59.621 "adrfam": "IPv4", 00:15:59.621 "traddr": "10.0.0.2", 00:15:59.621 "trsvcid": "4420" 00:15:59.621 }, 00:15:59.621 "peer_address": { 00:15:59.621 "trtype": "TCP", 00:15:59.621 "adrfam": "IPv4", 00:15:59.621 "traddr": "10.0.0.1", 00:15:59.621 "trsvcid": "36088" 00:15:59.621 }, 00:15:59.621 "auth": { 00:15:59.621 "state": "completed", 00:15:59.621 "digest": "sha384", 00:15:59.621 "dhgroup": "null" 00:15:59.621 } 00:15:59.621 } 00:15:59.621 ]' 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:59.621 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.879 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.879 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.879 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.137 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- 
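Each cycle also exercises the kernel nvme-cli initiator with the same key material before cleaning up on the target, which is the nvme connect / nvme disconnect / nvmf_subsystem_remove_host triple that recurs throughout this part of the log. A condensed sketch with the secrets elided - the long DHHC-1:... arguments in the trace carry the actual secret values, and the placeholders below are not real values:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "DHHC-1:00:<host secret, elided>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller secret, elided>:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # drop the host entry again so the next digest/dhgroup/key combination starts clean
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
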
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.071 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.330 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.589 00:16:01.589 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.589 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.589 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.846 { 00:16:01.846 "cntlid": 51, 00:16:01.846 "qid": 0, 00:16:01.846 "state": "enabled", 00:16:01.846 "thread": "nvmf_tgt_poll_group_000", 00:16:01.846 "listen_address": { 00:16:01.846 "trtype": "TCP", 00:16:01.846 "adrfam": "IPv4", 00:16:01.846 "traddr": "10.0.0.2", 00:16:01.846 "trsvcid": "4420" 00:16:01.846 }, 00:16:01.846 "peer_address": { 00:16:01.846 "trtype": "TCP", 00:16:01.846 "adrfam": "IPv4", 00:16:01.846 "traddr": "10.0.0.1", 00:16:01.846 "trsvcid": "36116" 00:16:01.846 }, 00:16:01.846 "auth": { 00:16:01.846 "state": "completed", 00:16:01.846 "digest": "sha384", 00:16:01.846 "dhgroup": "null" 00:16:01.846 } 00:16:01.846 } 00:16:01.846 ]' 00:16:01.846 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.847 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.104 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.040 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:03.297 11:42:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.297 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.554 00:16:03.554 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.554 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.554 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.811 { 00:16:03.811 "cntlid": 53, 00:16:03.811 "qid": 0, 00:16:03.811 "state": "enabled", 00:16:03.811 "thread": "nvmf_tgt_poll_group_000", 00:16:03.811 "listen_address": { 00:16:03.811 "trtype": "TCP", 00:16:03.811 "adrfam": "IPv4", 00:16:03.811 "traddr": "10.0.0.2", 00:16:03.811 "trsvcid": "4420" 00:16:03.811 }, 00:16:03.811 "peer_address": { 00:16:03.811 "trtype": "TCP", 00:16:03.811 "adrfam": "IPv4", 00:16:03.811 "traddr": "10.0.0.1", 00:16:03.811 "trsvcid": "57076" 00:16:03.811 }, 00:16:03.811 "auth": { 00:16:03.811 "state": "completed", 00:16:03.811 "digest": "sha384", 00:16:03.811 "dhgroup": "null" 00:16:03.811 } 00:16:03.811 } 00:16:03.811 ]' 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.811 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.378 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.945 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.510 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.510 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.768 { 00:16:05.768 "cntlid": 55, 00:16:05.768 "qid": 0, 00:16:05.768 "state": "enabled", 00:16:05.768 "thread": "nvmf_tgt_poll_group_000", 00:16:05.768 "listen_address": { 00:16:05.768 "trtype": "TCP", 00:16:05.768 "adrfam": "IPv4", 00:16:05.768 "traddr": "10.0.0.2", 00:16:05.768 "trsvcid": "4420" 00:16:05.768 }, 00:16:05.768 "peer_address": { 00:16:05.768 "trtype": "TCP", 00:16:05.768 "adrfam": "IPv4", 00:16:05.768 "traddr": "10.0.0.1", 00:16:05.768 "trsvcid": "57084" 00:16:05.768 }, 00:16:05.768 "auth": { 00:16:05.768 "state": "completed", 00:16:05.768 "digest": "sha384", 00:16:05.768 "dhgroup": "null" 00:16:05.768 } 00:16:05.768 } 00:16:05.768 ]' 00:16:05.768 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.026 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.283 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:16:07.217 11:42:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.217 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:07.217 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.217 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.217 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.217 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.218 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:07.218 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.477 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.735 00:16:07.735 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.735 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.735 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.992 11:42:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.992 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.992 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.992 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.992 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.992 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.992 { 00:16:07.992 "cntlid": 57, 00:16:07.993 "qid": 0, 00:16:07.993 "state": "enabled", 00:16:07.993 "thread": "nvmf_tgt_poll_group_000", 00:16:07.993 "listen_address": { 00:16:07.993 "trtype": "TCP", 00:16:07.993 "adrfam": "IPv4", 00:16:07.993 "traddr": "10.0.0.2", 00:16:07.993 "trsvcid": "4420" 00:16:07.993 }, 00:16:07.993 "peer_address": { 00:16:07.993 "trtype": "TCP", 00:16:07.993 "adrfam": "IPv4", 00:16:07.993 "traddr": "10.0.0.1", 00:16:07.993 "trsvcid": "57120" 00:16:07.993 }, 00:16:07.993 "auth": { 00:16:07.993 "state": "completed", 00:16:07.993 "digest": "sha384", 00:16:07.993 "dhgroup": "ffdhe2048" 00:16:07.993 } 00:16:07.993 } 00:16:07.993 ]' 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.993 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.251 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:09.187 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.445 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.704 00:16:09.704 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.704 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.704 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.961 { 00:16:09.961 "cntlid": 59, 00:16:09.961 "qid": 0, 00:16:09.961 "state": "enabled", 00:16:09.961 "thread": "nvmf_tgt_poll_group_000", 00:16:09.961 "listen_address": { 00:16:09.961 "trtype": "TCP", 00:16:09.961 "adrfam": "IPv4", 00:16:09.961 "traddr": "10.0.0.2", 00:16:09.961 "trsvcid": "4420" 00:16:09.961 }, 00:16:09.961 "peer_address": { 00:16:09.961 "trtype": "TCP", 00:16:09.961 "adrfam": "IPv4", 00:16:09.961 
"traddr": "10.0.0.1", 00:16:09.961 "trsvcid": "57138" 00:16:09.961 }, 00:16:09.961 "auth": { 00:16:09.961 "state": "completed", 00:16:09.961 "digest": "sha384", 00:16:09.961 "dhgroup": "ffdhe2048" 00:16:09.961 } 00:16:09.961 } 00:16:09.961 ]' 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.961 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.219 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.219 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.219 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.219 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.219 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.478 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:11.413 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.671 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.929 00:16:11.929 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.929 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.929 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.187 { 00:16:12.187 "cntlid": 61, 00:16:12.187 "qid": 0, 00:16:12.187 "state": "enabled", 00:16:12.187 "thread": "nvmf_tgt_poll_group_000", 00:16:12.187 "listen_address": { 00:16:12.187 "trtype": "TCP", 00:16:12.187 "adrfam": "IPv4", 00:16:12.187 "traddr": "10.0.0.2", 00:16:12.187 "trsvcid": "4420" 00:16:12.187 }, 00:16:12.187 "peer_address": { 00:16:12.187 "trtype": "TCP", 00:16:12.187 "adrfam": "IPv4", 00:16:12.187 "traddr": "10.0.0.1", 00:16:12.187 "trsvcid": "57162" 00:16:12.187 }, 00:16:12.187 "auth": { 00:16:12.187 "state": "completed", 00:16:12.187 "digest": "sha384", 00:16:12.187 "dhgroup": "ffdhe2048" 00:16:12.187 } 00:16:12.187 } 00:16:12.187 ]' 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.187 11:42:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.446 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.383 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.953 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.212 00:16:14.212 11:42:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.212 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.212 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.470 { 00:16:14.470 "cntlid": 63, 00:16:14.470 "qid": 0, 00:16:14.470 "state": "enabled", 00:16:14.470 "thread": "nvmf_tgt_poll_group_000", 00:16:14.470 "listen_address": { 00:16:14.470 "trtype": "TCP", 00:16:14.470 "adrfam": "IPv4", 00:16:14.470 "traddr": "10.0.0.2", 00:16:14.470 "trsvcid": "4420" 00:16:14.470 }, 00:16:14.470 "peer_address": { 00:16:14.470 "trtype": "TCP", 00:16:14.470 "adrfam": "IPv4", 00:16:14.470 "traddr": "10.0.0.1", 00:16:14.470 "trsvcid": "55246" 00:16:14.470 }, 00:16:14.470 "auth": { 00:16:14.470 "state": "completed", 00:16:14.470 "digest": "sha384", 00:16:14.470 "dhgroup": "ffdhe2048" 00:16:14.470 } 00:16:14.470 } 00:16:14.470 ]' 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.470 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.729 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.666 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.924 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.491 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.491 { 
00:16:16.491 "cntlid": 65, 00:16:16.491 "qid": 0, 00:16:16.491 "state": "enabled", 00:16:16.491 "thread": "nvmf_tgt_poll_group_000", 00:16:16.491 "listen_address": { 00:16:16.491 "trtype": "TCP", 00:16:16.491 "adrfam": "IPv4", 00:16:16.491 "traddr": "10.0.0.2", 00:16:16.491 "trsvcid": "4420" 00:16:16.491 }, 00:16:16.491 "peer_address": { 00:16:16.491 "trtype": "TCP", 00:16:16.491 "adrfam": "IPv4", 00:16:16.491 "traddr": "10.0.0.1", 00:16:16.491 "trsvcid": "55284" 00:16:16.491 }, 00:16:16.491 "auth": { 00:16:16.491 "state": "completed", 00:16:16.491 "digest": "sha384", 00:16:16.491 "dhgroup": "ffdhe3072" 00:16:16.491 } 00:16:16.491 } 00:16:16.491 ]' 00:16:16.491 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.781 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.062 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:17.996 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.254 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.511 00:16:18.511 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.511 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.511 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.769 { 00:16:18.769 "cntlid": 67, 00:16:18.769 "qid": 0, 00:16:18.769 "state": "enabled", 00:16:18.769 "thread": "nvmf_tgt_poll_group_000", 00:16:18.769 "listen_address": { 00:16:18.769 "trtype": "TCP", 00:16:18.769 "adrfam": "IPv4", 00:16:18.769 "traddr": "10.0.0.2", 00:16:18.769 "trsvcid": "4420" 00:16:18.769 }, 00:16:18.769 "peer_address": { 00:16:18.769 "trtype": "TCP", 00:16:18.769 "adrfam": "IPv4", 00:16:18.769 "traddr": "10.0.0.1", 00:16:18.769 "trsvcid": "55310" 00:16:18.769 }, 00:16:18.769 "auth": { 00:16:18.769 "state": "completed", 00:16:18.769 "digest": "sha384", 00:16:18.769 "dhgroup": "ffdhe3072" 00:16:18.769 } 00:16:18.769 } 00:16:18.769 ]' 00:16:18.769 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.027 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.027 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.027 11:42:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.027 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.027 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.027 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.027 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.284 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.218 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.218 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.794 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.794 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.051 { 00:16:21.051 "cntlid": 69, 00:16:21.051 "qid": 0, 00:16:21.051 "state": "enabled", 00:16:21.051 "thread": "nvmf_tgt_poll_group_000", 00:16:21.051 "listen_address": { 00:16:21.051 "trtype": "TCP", 00:16:21.051 "adrfam": "IPv4", 00:16:21.051 "traddr": "10.0.0.2", 00:16:21.051 "trsvcid": "4420" 00:16:21.051 }, 00:16:21.051 "peer_address": { 00:16:21.051 "trtype": "TCP", 00:16:21.051 "adrfam": "IPv4", 00:16:21.051 "traddr": "10.0.0.1", 00:16:21.051 "trsvcid": "55330" 00:16:21.051 }, 00:16:21.051 "auth": { 00:16:21.051 "state": "completed", 00:16:21.051 "digest": "sha384", 00:16:21.051 "dhgroup": "ffdhe3072" 00:16:21.051 } 00:16:21.051 } 00:16:21.051 ]' 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.051 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.308 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret 
DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:22.243 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.500 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.758 00:16:22.758 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.758 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.758 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.015 { 00:16:23.015 "cntlid": 71, 00:16:23.015 "qid": 0, 00:16:23.015 "state": "enabled", 00:16:23.015 "thread": "nvmf_tgt_poll_group_000", 00:16:23.015 "listen_address": { 00:16:23.015 "trtype": "TCP", 00:16:23.015 "adrfam": "IPv4", 00:16:23.015 "traddr": "10.0.0.2", 00:16:23.015 "trsvcid": "4420" 00:16:23.015 }, 00:16:23.015 "peer_address": { 00:16:23.015 "trtype": "TCP", 00:16:23.015 "adrfam": "IPv4", 00:16:23.015 "traddr": "10.0.0.1", 00:16:23.015 "trsvcid": "55708" 00:16:23.015 }, 00:16:23.015 "auth": { 00:16:23.015 "state": "completed", 00:16:23.015 "digest": "sha384", 00:16:23.015 "dhgroup": "ffdhe3072" 00:16:23.015 } 00:16:23.015 } 00:16:23.015 ]' 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.015 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.272 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.272 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.272 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.272 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.272 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.529 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:24.460 11:42:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.718 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.976 00:16:24.976 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.976 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.976 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.267 { 00:16:25.267 "cntlid": 73, 00:16:25.267 "qid": 0, 00:16:25.267 "state": "enabled", 00:16:25.267 "thread": "nvmf_tgt_poll_group_000", 00:16:25.267 "listen_address": { 00:16:25.267 "trtype": "TCP", 00:16:25.267 "adrfam": "IPv4", 00:16:25.267 "traddr": "10.0.0.2", 00:16:25.267 "trsvcid": "4420" 00:16:25.267 }, 00:16:25.267 "peer_address": { 00:16:25.267 "trtype": "TCP", 00:16:25.267 "adrfam": "IPv4", 00:16:25.267 "traddr": "10.0.0.1", 00:16:25.267 "trsvcid": "55738" 00:16:25.267 }, 00:16:25.267 "auth": { 00:16:25.267 
"state": "completed", 00:16:25.267 "digest": "sha384", 00:16:25.267 "dhgroup": "ffdhe4096" 00:16:25.267 } 00:16:25.267 } 00:16:25.267 ]' 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.267 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.524 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.524 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.524 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.782 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.758 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.323 00:16:27.323 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.323 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.323 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.581 { 00:16:27.581 "cntlid": 75, 00:16:27.581 "qid": 0, 00:16:27.581 "state": "enabled", 00:16:27.581 "thread": "nvmf_tgt_poll_group_000", 00:16:27.581 "listen_address": { 00:16:27.581 "trtype": "TCP", 00:16:27.581 "adrfam": "IPv4", 00:16:27.581 "traddr": "10.0.0.2", 00:16:27.581 "trsvcid": "4420" 00:16:27.581 }, 00:16:27.581 "peer_address": { 00:16:27.581 "trtype": "TCP", 00:16:27.581 "adrfam": "IPv4", 00:16:27.581 "traddr": "10.0.0.1", 00:16:27.581 "trsvcid": "55776" 00:16:27.581 }, 00:16:27.581 "auth": { 00:16:27.581 "state": "completed", 00:16:27.581 "digest": "sha384", 00:16:27.581 "dhgroup": "ffdhe4096" 00:16:27.581 } 00:16:27.581 } 00:16:27.581 ]' 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.581 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.840 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:28.773 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.031 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:29.596 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.596 { 00:16:29.596 "cntlid": 77, 00:16:29.596 "qid": 0, 00:16:29.596 "state": "enabled", 00:16:29.596 "thread": "nvmf_tgt_poll_group_000", 00:16:29.596 "listen_address": { 00:16:29.596 "trtype": "TCP", 00:16:29.596 "adrfam": "IPv4", 00:16:29.596 "traddr": "10.0.0.2", 00:16:29.596 "trsvcid": "4420" 00:16:29.596 }, 00:16:29.596 "peer_address": { 00:16:29.596 "trtype": "TCP", 00:16:29.596 "adrfam": "IPv4", 00:16:29.596 "traddr": "10.0.0.1", 00:16:29.596 "trsvcid": "55804" 00:16:29.596 }, 00:16:29.596 "auth": { 00:16:29.596 "state": "completed", 00:16:29.596 "digest": "sha384", 00:16:29.596 "dhgroup": "ffdhe4096" 00:16:29.596 } 00:16:29.596 } 00:16:29.596 ]' 00:16:29.596 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.854 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.111 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.044 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.302 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.559 00:16:31.559 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.559 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.559 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.817 { 00:16:31.817 "cntlid": 79, 00:16:31.817 "qid": 
0, 00:16:31.817 "state": "enabled", 00:16:31.817 "thread": "nvmf_tgt_poll_group_000", 00:16:31.817 "listen_address": { 00:16:31.817 "trtype": "TCP", 00:16:31.817 "adrfam": "IPv4", 00:16:31.817 "traddr": "10.0.0.2", 00:16:31.817 "trsvcid": "4420" 00:16:31.817 }, 00:16:31.817 "peer_address": { 00:16:31.817 "trtype": "TCP", 00:16:31.817 "adrfam": "IPv4", 00:16:31.817 "traddr": "10.0.0.1", 00:16:31.817 "trsvcid": "55834" 00:16:31.817 }, 00:16:31.817 "auth": { 00:16:31.817 "state": "completed", 00:16:31.817 "digest": "sha384", 00:16:31.817 "dhgroup": "ffdhe4096" 00:16:31.817 } 00:16:31.817 } 00:16:31.817 ]' 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.817 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.075 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.075 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.075 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:33.007 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.265 11:42:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.265 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.830 00:16:33.830 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.830 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.830 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.088 { 00:16:34.088 "cntlid": 81, 00:16:34.088 "qid": 0, 00:16:34.088 "state": "enabled", 00:16:34.088 "thread": "nvmf_tgt_poll_group_000", 00:16:34.088 "listen_address": { 00:16:34.088 "trtype": "TCP", 00:16:34.088 "adrfam": "IPv4", 00:16:34.088 "traddr": "10.0.0.2", 00:16:34.088 "trsvcid": "4420" 00:16:34.088 }, 00:16:34.088 "peer_address": { 00:16:34.088 "trtype": "TCP", 00:16:34.088 "adrfam": "IPv4", 00:16:34.088 "traddr": "10.0.0.1", 00:16:34.088 "trsvcid": "42394" 00:16:34.088 }, 00:16:34.088 "auth": { 00:16:34.088 "state": "completed", 00:16:34.088 "digest": "sha384", 00:16:34.088 "dhgroup": "ffdhe6144" 00:16:34.088 } 00:16:34.088 } 00:16:34.088 ]' 00:16:34.088 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.088 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.088 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.088 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.088 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.346 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.346 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.346 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.603 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.535 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.536 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.100 00:16:36.100 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.100 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.100 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.358 { 00:16:36.358 "cntlid": 83, 00:16:36.358 "qid": 0, 00:16:36.358 "state": "enabled", 00:16:36.358 "thread": "nvmf_tgt_poll_group_000", 00:16:36.358 "listen_address": { 00:16:36.358 "trtype": "TCP", 00:16:36.358 "adrfam": "IPv4", 00:16:36.358 "traddr": "10.0.0.2", 00:16:36.358 "trsvcid": "4420" 00:16:36.358 }, 00:16:36.358 "peer_address": { 00:16:36.358 "trtype": "TCP", 00:16:36.358 "adrfam": "IPv4", 00:16:36.358 "traddr": "10.0.0.1", 00:16:36.358 "trsvcid": "42408" 00:16:36.358 }, 00:16:36.358 "auth": { 00:16:36.358 "state": "completed", 00:16:36.358 "digest": "sha384", 00:16:36.358 "dhgroup": "ffdhe6144" 00:16:36.358 } 00:16:36.358 } 00:16:36.358 ]' 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.358 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.617 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.617 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.617 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.617 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.874 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret 
DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.807 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.808 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.373 00:16:38.373 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.373 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.373 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.631 { 00:16:38.631 "cntlid": 85, 00:16:38.631 "qid": 0, 00:16:38.631 "state": "enabled", 00:16:38.631 "thread": "nvmf_tgt_poll_group_000", 00:16:38.631 "listen_address": { 00:16:38.631 "trtype": "TCP", 00:16:38.631 "adrfam": "IPv4", 00:16:38.631 "traddr": "10.0.0.2", 00:16:38.631 "trsvcid": "4420" 00:16:38.631 }, 00:16:38.631 "peer_address": { 00:16:38.631 "trtype": "TCP", 00:16:38.631 "adrfam": "IPv4", 00:16:38.631 "traddr": "10.0.0.1", 00:16:38.631 "trsvcid": "42434" 00:16:38.631 }, 00:16:38.631 "auth": { 00:16:38.631 "state": "completed", 00:16:38.631 "digest": "sha384", 00:16:38.631 "dhgroup": "ffdhe6144" 00:16:38.631 } 00:16:38.631 } 00:16:38.631 ]' 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.631 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.889 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.889 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.889 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.146 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:16:40.109 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.109 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.675 00:16:40.675 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.675 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.675 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.933 { 00:16:40.933 "cntlid": 87, 00:16:40.933 "qid": 0, 00:16:40.933 "state": "enabled", 00:16:40.933 "thread": "nvmf_tgt_poll_group_000", 00:16:40.933 "listen_address": { 00:16:40.933 "trtype": "TCP", 00:16:40.933 "adrfam": "IPv4", 00:16:40.933 "traddr": "10.0.0.2", 00:16:40.933 "trsvcid": "4420" 00:16:40.933 }, 00:16:40.933 "peer_address": { 00:16:40.933 "trtype": "TCP", 00:16:40.933 "adrfam": "IPv4", 00:16:40.933 "traddr": "10.0.0.1", 00:16:40.933 "trsvcid": "42454" 00:16:40.933 }, 00:16:40.933 "auth": { 00:16:40.933 "state": "completed", 
00:16:40.933 "digest": "sha384", 00:16:40.933 "dhgroup": "ffdhe6144" 00:16:40.933 } 00:16:40.933 } 00:16:40.933 ]' 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.933 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.192 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.192 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.192 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.449 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.381 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.309 00:16:43.309 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.309 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.309 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.566 { 00:16:43.566 "cntlid": 89, 00:16:43.566 "qid": 0, 00:16:43.566 "state": "enabled", 00:16:43.566 "thread": "nvmf_tgt_poll_group_000", 00:16:43.566 "listen_address": { 00:16:43.566 "trtype": "TCP", 00:16:43.566 "adrfam": "IPv4", 00:16:43.566 "traddr": "10.0.0.2", 00:16:43.566 "trsvcid": "4420" 00:16:43.566 }, 00:16:43.566 "peer_address": { 00:16:43.566 "trtype": "TCP", 00:16:43.566 "adrfam": "IPv4", 00:16:43.566 "traddr": "10.0.0.1", 00:16:43.566 "trsvcid": "33404" 00:16:43.566 }, 00:16:43.566 "auth": { 00:16:43.566 "state": "completed", 00:16:43.566 "digest": "sha384", 00:16:43.566 "dhgroup": "ffdhe8192" 00:16:43.566 } 00:16:43.566 } 00:16:43.566 ]' 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.566 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.823 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:44.753 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.010 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:16:45.941 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.941 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.199 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.199 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.199 { 00:16:46.199 "cntlid": 91, 00:16:46.199 "qid": 0, 00:16:46.199 "state": "enabled", 00:16:46.199 "thread": "nvmf_tgt_poll_group_000", 00:16:46.199 "listen_address": { 00:16:46.199 "trtype": "TCP", 00:16:46.199 "adrfam": "IPv4", 00:16:46.199 "traddr": "10.0.0.2", 00:16:46.199 "trsvcid": "4420" 00:16:46.199 }, 00:16:46.199 "peer_address": { 00:16:46.199 "trtype": "TCP", 00:16:46.199 "adrfam": "IPv4", 00:16:46.199 "traddr": "10.0.0.1", 00:16:46.199 "trsvcid": "33414" 00:16:46.199 }, 00:16:46.199 "auth": { 00:16:46.199 "state": "completed", 00:16:46.199 "digest": "sha384", 00:16:46.199 "dhgroup": "ffdhe8192" 00:16:46.199 } 00:16:46.199 } 00:16:46.199 ]' 00:16:46.199 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.199 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.199 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.199 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.199 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.199 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.199 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.199 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.457 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.389 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.646 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.647 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.577 00:16:48.577 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.577 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.577 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.577 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.577 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.578 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.578 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.578 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.578 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.578 { 
00:16:48.578 "cntlid": 93, 00:16:48.578 "qid": 0, 00:16:48.578 "state": "enabled", 00:16:48.578 "thread": "nvmf_tgt_poll_group_000", 00:16:48.578 "listen_address": { 00:16:48.578 "trtype": "TCP", 00:16:48.578 "adrfam": "IPv4", 00:16:48.578 "traddr": "10.0.0.2", 00:16:48.578 "trsvcid": "4420" 00:16:48.578 }, 00:16:48.578 "peer_address": { 00:16:48.578 "trtype": "TCP", 00:16:48.578 "adrfam": "IPv4", 00:16:48.578 "traddr": "10.0.0.1", 00:16:48.578 "trsvcid": "33440" 00:16:48.578 }, 00:16:48.578 "auth": { 00:16:48.578 "state": "completed", 00:16:48.578 "digest": "sha384", 00:16:48.578 "dhgroup": "ffdhe8192" 00:16:48.578 } 00:16:48.578 } 00:16:48.578 ]' 00:16:48.578 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.837 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.094 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.026 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.283 11:42:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.283 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.217 00:16:51.217 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.217 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.217 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.217 { 00:16:51.217 "cntlid": 95, 00:16:51.217 "qid": 0, 00:16:51.217 "state": "enabled", 00:16:51.217 "thread": "nvmf_tgt_poll_group_000", 00:16:51.217 "listen_address": { 00:16:51.217 "trtype": "TCP", 00:16:51.217 "adrfam": "IPv4", 00:16:51.217 "traddr": "10.0.0.2", 00:16:51.217 "trsvcid": "4420" 00:16:51.217 }, 00:16:51.217 "peer_address": { 00:16:51.217 "trtype": "TCP", 00:16:51.217 "adrfam": "IPv4", 00:16:51.217 "traddr": "10.0.0.1", 00:16:51.217 "trsvcid": "33464" 00:16:51.217 }, 00:16:51.217 "auth": { 00:16:51.217 "state": "completed", 00:16:51.217 "digest": "sha384", 00:16:51.217 "dhgroup": "ffdhe8192" 00:16:51.217 } 00:16:51.217 } 00:16:51.217 ]' 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.217 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.475 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.475 11:42:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.475 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.475 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.475 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.733 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:52.678 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.937 11:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.194 00:16:53.194 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.194 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.194 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.451 { 00:16:53.451 "cntlid": 97, 00:16:53.451 "qid": 0, 00:16:53.451 "state": "enabled", 00:16:53.451 "thread": "nvmf_tgt_poll_group_000", 00:16:53.451 "listen_address": { 00:16:53.451 "trtype": "TCP", 00:16:53.451 "adrfam": "IPv4", 00:16:53.451 "traddr": "10.0.0.2", 00:16:53.451 "trsvcid": "4420" 00:16:53.451 }, 00:16:53.451 "peer_address": { 00:16:53.451 "trtype": "TCP", 00:16:53.451 "adrfam": "IPv4", 00:16:53.451 "traddr": "10.0.0.1", 00:16:53.451 "trsvcid": "46762" 00:16:53.451 }, 00:16:53.451 "auth": { 00:16:53.451 "state": "completed", 00:16:53.451 "digest": "sha512", 00:16:53.451 "dhgroup": "null" 00:16:53.451 } 00:16:53.451 } 00:16:53.451 ]' 00:16:53.451 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.709 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.966 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret 
DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:54.898 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.156 11:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.414 00:16:55.414 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.414 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.414 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.671 { 00:16:55.671 "cntlid": 99, 00:16:55.671 "qid": 0, 00:16:55.671 "state": "enabled", 00:16:55.671 "thread": "nvmf_tgt_poll_group_000", 00:16:55.671 "listen_address": { 00:16:55.671 "trtype": "TCP", 00:16:55.671 "adrfam": "IPv4", 00:16:55.671 "traddr": "10.0.0.2", 00:16:55.671 "trsvcid": "4420" 00:16:55.671 }, 00:16:55.671 "peer_address": { 00:16:55.671 "trtype": "TCP", 00:16:55.671 "adrfam": "IPv4", 00:16:55.671 "traddr": "10.0.0.1", 00:16:55.671 "trsvcid": "46786" 00:16:55.671 }, 00:16:55.671 "auth": { 00:16:55.671 "state": "completed", 00:16:55.671 "digest": "sha512", 00:16:55.671 "dhgroup": "null" 00:16:55.671 } 00:16:55.671 } 00:16:55.671 ]' 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.671 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.928 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:55.928 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.928 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.928 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.928 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.186 11:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.119 11:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.119 11:43:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.377 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.635 00:16:57.635 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.635 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.635 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.893 { 00:16:57.893 "cntlid": 101, 00:16:57.893 "qid": 0, 00:16:57.893 "state": "enabled", 00:16:57.893 "thread": "nvmf_tgt_poll_group_000", 00:16:57.893 "listen_address": { 00:16:57.893 "trtype": "TCP", 00:16:57.893 "adrfam": "IPv4", 00:16:57.893 "traddr": "10.0.0.2", 00:16:57.893 "trsvcid": "4420" 00:16:57.893 }, 00:16:57.893 "peer_address": { 00:16:57.893 "trtype": "TCP", 00:16:57.893 "adrfam": "IPv4", 00:16:57.893 "traddr": "10.0.0.1", 00:16:57.893 "trsvcid": "46814" 00:16:57.893 }, 00:16:57.893 "auth": 
{ 00:16:57.893 "state": "completed", 00:16:57.893 "digest": "sha512", 00:16:57.893 "dhgroup": "null" 00:16:57.893 } 00:16:57.893 } 00:16:57.893 ]' 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.893 11:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.151 11:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.084 11:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.341 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.907 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.907 11:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.165 { 00:17:00.165 "cntlid": 103, 00:17:00.165 "qid": 0, 00:17:00.165 "state": "enabled", 00:17:00.165 "thread": "nvmf_tgt_poll_group_000", 00:17:00.165 "listen_address": { 00:17:00.165 "trtype": "TCP", 00:17:00.165 "adrfam": "IPv4", 00:17:00.165 "traddr": "10.0.0.2", 00:17:00.165 "trsvcid": "4420" 00:17:00.165 }, 00:17:00.165 "peer_address": { 00:17:00.165 "trtype": "TCP", 00:17:00.165 "adrfam": "IPv4", 00:17:00.165 "traddr": "10.0.0.1", 00:17:00.165 "trsvcid": "46836" 00:17:00.165 }, 00:17:00.165 "auth": { 00:17:00.165 "state": "completed", 00:17:00.165 "digest": "sha512", 00:17:00.165 "dhgroup": "null" 00:17:00.165 } 00:17:00.165 } 00:17:00.165 ]' 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.165 11:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.165 11:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.165 11:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.165 11:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.423 11:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.351 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.608 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.865 00:17:01.865 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.865 11:43:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.865 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.122 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.122 11:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.122 { 00:17:02.122 "cntlid": 105, 00:17:02.122 "qid": 0, 00:17:02.122 "state": "enabled", 00:17:02.122 "thread": "nvmf_tgt_poll_group_000", 00:17:02.122 "listen_address": { 00:17:02.122 "trtype": "TCP", 00:17:02.122 "adrfam": "IPv4", 00:17:02.122 "traddr": "10.0.0.2", 00:17:02.122 "trsvcid": "4420" 00:17:02.122 }, 00:17:02.122 "peer_address": { 00:17:02.122 "trtype": "TCP", 00:17:02.122 "adrfam": "IPv4", 00:17:02.122 "traddr": "10.0.0.1", 00:17:02.122 "trsvcid": "46862" 00:17:02.122 }, 00:17:02.122 "auth": { 00:17:02.122 "state": "completed", 00:17:02.122 "digest": "sha512", 00:17:02.122 "dhgroup": "ffdhe2048" 00:17:02.122 } 00:17:02.122 } 00:17:02.122 ]' 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.122 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.404 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.404 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.404 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.404 11:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.337 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.595 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.852 00:17:03.852 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.852 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.852 11:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.110 { 00:17:04.110 "cntlid": 107, 00:17:04.110 "qid": 0, 00:17:04.110 "state": "enabled", 00:17:04.110 "thread": 
"nvmf_tgt_poll_group_000", 00:17:04.110 "listen_address": { 00:17:04.110 "trtype": "TCP", 00:17:04.110 "adrfam": "IPv4", 00:17:04.110 "traddr": "10.0.0.2", 00:17:04.110 "trsvcid": "4420" 00:17:04.110 }, 00:17:04.110 "peer_address": { 00:17:04.110 "trtype": "TCP", 00:17:04.110 "adrfam": "IPv4", 00:17:04.110 "traddr": "10.0.0.1", 00:17:04.110 "trsvcid": "47882" 00:17:04.110 }, 00:17:04.110 "auth": { 00:17:04.110 "state": "completed", 00:17:04.110 "digest": "sha512", 00:17:04.110 "dhgroup": "ffdhe2048" 00:17:04.110 } 00:17:04.110 } 00:17:04.110 ]' 00:17:04.110 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.367 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.623 11:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.553 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:05.810 11:43:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.810 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.068 00:17:06.068 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.068 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.068 11:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.325 { 00:17:06.325 "cntlid": 109, 00:17:06.325 "qid": 0, 00:17:06.325 "state": "enabled", 00:17:06.325 "thread": "nvmf_tgt_poll_group_000", 00:17:06.325 "listen_address": { 00:17:06.325 "trtype": "TCP", 00:17:06.325 "adrfam": "IPv4", 00:17:06.325 "traddr": "10.0.0.2", 00:17:06.325 "trsvcid": "4420" 00:17:06.325 }, 00:17:06.325 "peer_address": { 00:17:06.325 "trtype": "TCP", 00:17:06.325 "adrfam": "IPv4", 00:17:06.325 "traddr": "10.0.0.1", 00:17:06.325 "trsvcid": "47914" 00:17:06.325 }, 00:17:06.325 "auth": { 00:17:06.325 "state": "completed", 00:17:06.325 "digest": "sha512", 00:17:06.325 "dhgroup": "ffdhe2048" 00:17:06.325 } 00:17:06.325 } 00:17:06.325 ]' 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.325 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.890 11:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.821 11:43:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.078 00:17:08.336 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.336 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.336 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.594 { 00:17:08.594 "cntlid": 111, 00:17:08.594 "qid": 0, 00:17:08.594 "state": "enabled", 00:17:08.594 "thread": "nvmf_tgt_poll_group_000", 00:17:08.594 "listen_address": { 00:17:08.594 "trtype": "TCP", 00:17:08.594 "adrfam": "IPv4", 00:17:08.594 "traddr": "10.0.0.2", 00:17:08.594 "trsvcid": "4420" 00:17:08.594 }, 00:17:08.594 "peer_address": { 00:17:08.594 "trtype": "TCP", 00:17:08.594 "adrfam": "IPv4", 00:17:08.594 "traddr": "10.0.0.1", 00:17:08.594 "trsvcid": "47936" 00:17:08.594 }, 00:17:08.594 "auth": { 00:17:08.594 "state": "completed", 00:17:08.594 "digest": "sha512", 00:17:08.594 "dhgroup": "ffdhe2048" 00:17:08.594 } 00:17:08.594 } 00:17:08.594 ]' 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.594 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.852 11:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:09.784 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.784 11:43:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.785 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.042 11:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.300 00:17:10.300 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.300 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.300 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.557 { 00:17:10.557 "cntlid": 113, 00:17:10.557 "qid": 0, 00:17:10.557 "state": "enabled", 00:17:10.557 "thread": "nvmf_tgt_poll_group_000", 00:17:10.557 "listen_address": { 00:17:10.557 "trtype": "TCP", 00:17:10.557 "adrfam": "IPv4", 00:17:10.557 "traddr": "10.0.0.2", 00:17:10.557 "trsvcid": "4420" 00:17:10.557 }, 00:17:10.557 "peer_address": { 00:17:10.557 "trtype": "TCP", 00:17:10.557 "adrfam": "IPv4", 00:17:10.557 "traddr": "10.0.0.1", 00:17:10.557 "trsvcid": "47952" 00:17:10.557 }, 00:17:10.557 "auth": { 00:17:10.557 "state": "completed", 00:17:10.557 "digest": "sha512", 00:17:10.557 "dhgroup": "ffdhe3072" 00:17:10.557 } 00:17:10.557 } 00:17:10.557 ]' 00:17:10.557 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.815 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.072 11:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.006 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.263 11:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.263 11:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.263 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.264 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.522 00:17:12.522 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.522 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.522 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.780 { 00:17:12.780 "cntlid": 115, 00:17:12.780 "qid": 0, 00:17:12.780 "state": "enabled", 00:17:12.780 "thread": "nvmf_tgt_poll_group_000", 00:17:12.780 "listen_address": { 00:17:12.780 "trtype": "TCP", 00:17:12.780 "adrfam": "IPv4", 00:17:12.780 "traddr": "10.0.0.2", 00:17:12.780 "trsvcid": "4420" 00:17:12.780 }, 00:17:12.780 "peer_address": { 00:17:12.780 "trtype": "TCP", 00:17:12.780 "adrfam": "IPv4", 00:17:12.780 "traddr": "10.0.0.1", 00:17:12.780 "trsvcid": "47990" 00:17:12.780 }, 00:17:12.780 "auth": { 00:17:12.780 "state": "completed", 00:17:12.780 "digest": "sha512", 00:17:12.780 "dhgroup": "ffdhe3072" 00:17:12.780 } 00:17:12.780 } 
00:17:12.780 ]' 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.780 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.038 11:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.972 11:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.229 11:43:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.229 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.230 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.487 00:17:14.745 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.745 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.745 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.003 { 00:17:15.003 "cntlid": 117, 00:17:15.003 "qid": 0, 00:17:15.003 "state": "enabled", 00:17:15.003 "thread": "nvmf_tgt_poll_group_000", 00:17:15.003 "listen_address": { 00:17:15.003 "trtype": "TCP", 00:17:15.003 "adrfam": "IPv4", 00:17:15.003 "traddr": "10.0.0.2", 00:17:15.003 "trsvcid": "4420" 00:17:15.003 }, 00:17:15.003 "peer_address": { 00:17:15.003 "trtype": "TCP", 00:17:15.003 "adrfam": "IPv4", 00:17:15.003 "traddr": "10.0.0.1", 00:17:15.003 "trsvcid": "44520" 00:17:15.003 }, 00:17:15.003 "auth": { 00:17:15.003 "state": "completed", 00:17:15.003 "digest": "sha512", 00:17:15.003 "dhgroup": "ffdhe3072" 00:17:15.003 } 00:17:15.003 } 00:17:15.003 ]' 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.003 11:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.262 11:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.193 11:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.451 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.709 00:17:16.709 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.709 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.709 11:43:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.967 { 00:17:16.967 "cntlid": 119, 00:17:16.967 "qid": 0, 00:17:16.967 "state": "enabled", 00:17:16.967 "thread": "nvmf_tgt_poll_group_000", 00:17:16.967 "listen_address": { 00:17:16.967 "trtype": "TCP", 00:17:16.967 "adrfam": "IPv4", 00:17:16.967 "traddr": "10.0.0.2", 00:17:16.967 "trsvcid": "4420" 00:17:16.967 }, 00:17:16.967 "peer_address": { 00:17:16.967 "trtype": "TCP", 00:17:16.967 "adrfam": "IPv4", 00:17:16.967 "traddr": "10.0.0.1", 00:17:16.967 "trsvcid": "44548" 00:17:16.967 }, 00:17:16.967 "auth": { 00:17:16.967 "state": "completed", 00:17:16.967 "digest": "sha512", 00:17:16.967 "dhgroup": "ffdhe3072" 00:17:16.967 } 00:17:16.967 } 00:17:16.967 ]' 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.967 11:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.224 11:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.156 11:43:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.156 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.413 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.976 00:17:18.976 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.976 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.976 11:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.233 { 00:17:19.233 "cntlid": 121, 00:17:19.233 "qid": 0, 00:17:19.233 "state": "enabled", 00:17:19.233 "thread": "nvmf_tgt_poll_group_000", 00:17:19.233 "listen_address": { 00:17:19.233 "trtype": "TCP", 00:17:19.233 "adrfam": "IPv4", 
00:17:19.233 "traddr": "10.0.0.2", 00:17:19.233 "trsvcid": "4420" 00:17:19.233 }, 00:17:19.233 "peer_address": { 00:17:19.233 "trtype": "TCP", 00:17:19.233 "adrfam": "IPv4", 00:17:19.233 "traddr": "10.0.0.1", 00:17:19.233 "trsvcid": "44570" 00:17:19.233 }, 00:17:19.233 "auth": { 00:17:19.233 "state": "completed", 00:17:19.233 "digest": "sha512", 00:17:19.233 "dhgroup": "ffdhe4096" 00:17:19.233 } 00:17:19.233 } 00:17:19.233 ]' 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.233 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.491 11:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:17:20.423 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.424 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.682 11:43:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.682 11:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.940 11:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.940 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.940 11:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.197 00:17:21.197 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.197 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.197 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.455 { 00:17:21.455 "cntlid": 123, 00:17:21.455 "qid": 0, 00:17:21.455 "state": "enabled", 00:17:21.455 "thread": "nvmf_tgt_poll_group_000", 00:17:21.455 "listen_address": { 00:17:21.455 "trtype": "TCP", 00:17:21.455 "adrfam": "IPv4", 00:17:21.455 "traddr": "10.0.0.2", 00:17:21.455 "trsvcid": "4420" 00:17:21.455 }, 00:17:21.455 "peer_address": { 00:17:21.455 "trtype": "TCP", 00:17:21.455 "adrfam": "IPv4", 00:17:21.455 "traddr": "10.0.0.1", 00:17:21.455 "trsvcid": "44588" 00:17:21.455 }, 00:17:21.455 "auth": { 00:17:21.455 "state": "completed", 00:17:21.455 "digest": "sha512", 00:17:21.455 "dhgroup": "ffdhe4096" 00:17:21.455 } 00:17:21.455 } 00:17:21.455 ]' 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.455 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.713 11:43:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.713 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.713 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.971 11:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.904 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.162 11:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.420 00:17:23.420 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.420 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.420 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.678 { 00:17:23.678 "cntlid": 125, 00:17:23.678 "qid": 0, 00:17:23.678 "state": "enabled", 00:17:23.678 "thread": "nvmf_tgt_poll_group_000", 00:17:23.678 "listen_address": { 00:17:23.678 "trtype": "TCP", 00:17:23.678 "adrfam": "IPv4", 00:17:23.678 "traddr": "10.0.0.2", 00:17:23.678 "trsvcid": "4420" 00:17:23.678 }, 00:17:23.678 "peer_address": { 00:17:23.678 "trtype": "TCP", 00:17:23.678 "adrfam": "IPv4", 00:17:23.678 "traddr": "10.0.0.1", 00:17:23.678 "trsvcid": "53996" 00:17:23.678 }, 00:17:23.678 "auth": { 00:17:23.678 "state": "completed", 00:17:23.678 "digest": "sha512", 00:17:23.678 "dhgroup": "ffdhe4096" 00:17:23.678 } 00:17:23.678 } 00:17:23.678 ]' 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.678 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.936 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.936 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.936 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.194 11:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.163 11:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.445 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.703 00:17:25.703 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.703 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.703 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.961 { 00:17:25.961 "cntlid": 127, 00:17:25.961 "qid": 0, 00:17:25.961 "state": "enabled", 00:17:25.961 "thread": "nvmf_tgt_poll_group_000", 00:17:25.961 "listen_address": { 00:17:25.961 "trtype": "TCP", 00:17:25.961 "adrfam": "IPv4", 00:17:25.961 "traddr": "10.0.0.2", 00:17:25.961 "trsvcid": "4420" 00:17:25.961 }, 00:17:25.961 "peer_address": { 00:17:25.961 "trtype": "TCP", 00:17:25.961 "adrfam": "IPv4", 00:17:25.961 "traddr": "10.0.0.1", 00:17:25.961 "trsvcid": "54022" 00:17:25.961 }, 00:17:25.961 "auth": { 00:17:25.961 "state": "completed", 00:17:25.961 "digest": "sha512", 00:17:25.961 "dhgroup": "ffdhe4096" 00:17:25.961 } 00:17:25.961 } 00:17:25.961 ]' 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.961 11:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.219 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.151 11:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.409 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.974 00:17:27.974 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.974 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.974 11:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.232 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.232 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.232 11:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.232 11:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.232 11:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.232 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.232 { 00:17:28.232 "cntlid": 129, 00:17:28.232 "qid": 0, 00:17:28.232 "state": "enabled", 00:17:28.232 "thread": "nvmf_tgt_poll_group_000", 00:17:28.232 "listen_address": { 00:17:28.232 "trtype": "TCP", 00:17:28.232 "adrfam": "IPv4", 00:17:28.232 "traddr": "10.0.0.2", 00:17:28.232 "trsvcid": "4420" 00:17:28.232 }, 00:17:28.232 "peer_address": { 00:17:28.232 "trtype": "TCP", 00:17:28.233 "adrfam": "IPv4", 00:17:28.233 "traddr": "10.0.0.1", 00:17:28.233 "trsvcid": "54050" 00:17:28.233 }, 00:17:28.233 "auth": { 00:17:28.233 "state": "completed", 00:17:28.233 "digest": "sha512", 00:17:28.233 "dhgroup": "ffdhe6144" 00:17:28.233 } 00:17:28.233 } 00:17:28.233 ]' 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.233 11:43:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.233 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.491 11:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.424 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.681 11:43:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.681 11:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.245 00:17:30.245 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.245 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.245 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.502 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.502 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.502 11:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.502 11:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.502 11:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.502 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.502 { 00:17:30.502 "cntlid": 131, 00:17:30.502 "qid": 0, 00:17:30.502 "state": "enabled", 00:17:30.502 "thread": "nvmf_tgt_poll_group_000", 00:17:30.502 "listen_address": { 00:17:30.502 "trtype": "TCP", 00:17:30.502 "adrfam": "IPv4", 00:17:30.502 "traddr": "10.0.0.2", 00:17:30.502 "trsvcid": "4420" 00:17:30.502 }, 00:17:30.502 "peer_address": { 00:17:30.502 "trtype": "TCP", 00:17:30.502 "adrfam": "IPv4", 00:17:30.502 "traddr": "10.0.0.1", 00:17:30.502 "trsvcid": "54072" 00:17:30.503 }, 00:17:30.503 "auth": { 00:17:30.503 "state": "completed", 00:17:30.503 "digest": "sha512", 00:17:30.503 "dhgroup": "ffdhe6144" 00:17:30.503 } 00:17:30.503 } 00:17:30.503 ]' 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.503 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.761 11:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.694 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.951 11:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.517 00:17:32.517 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.517 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.517 11:43:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.775 { 00:17:32.775 "cntlid": 133, 00:17:32.775 "qid": 0, 00:17:32.775 "state": "enabled", 00:17:32.775 "thread": "nvmf_tgt_poll_group_000", 00:17:32.775 "listen_address": { 00:17:32.775 "trtype": "TCP", 00:17:32.775 "adrfam": "IPv4", 00:17:32.775 "traddr": "10.0.0.2", 00:17:32.775 "trsvcid": "4420" 00:17:32.775 }, 00:17:32.775 "peer_address": { 00:17:32.775 "trtype": "TCP", 00:17:32.775 "adrfam": "IPv4", 00:17:32.775 "traddr": "10.0.0.1", 00:17:32.775 "trsvcid": "54106" 00:17:32.775 }, 00:17:32.775 "auth": { 00:17:32.775 "state": "completed", 00:17:32.775 "digest": "sha512", 00:17:32.775 "dhgroup": "ffdhe6144" 00:17:32.775 } 00:17:32.775 } 00:17:32.775 ]' 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.775 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.032 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.032 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.032 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.032 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.032 11:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.289 11:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.220 11:43:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.220 11:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.477 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.041 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.041 11:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.041 11:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.041 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.041 { 00:17:35.041 "cntlid": 135, 00:17:35.041 "qid": 0, 00:17:35.041 "state": "enabled", 00:17:35.041 "thread": "nvmf_tgt_poll_group_000", 00:17:35.041 "listen_address": { 00:17:35.041 "trtype": "TCP", 00:17:35.041 "adrfam": "IPv4", 00:17:35.041 "traddr": "10.0.0.2", 00:17:35.041 "trsvcid": "4420" 00:17:35.041 }, 
00:17:35.041 "peer_address": { 00:17:35.041 "trtype": "TCP", 00:17:35.041 "adrfam": "IPv4", 00:17:35.041 "traddr": "10.0.0.1", 00:17:35.041 "trsvcid": "54664" 00:17:35.041 }, 00:17:35.041 "auth": { 00:17:35.041 "state": "completed", 00:17:35.041 "digest": "sha512", 00:17:35.041 "dhgroup": "ffdhe6144" 00:17:35.041 } 00:17:35.041 } 00:17:35.041 ]' 00:17:35.041 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.298 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.554 11:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.484 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.740 11:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.671 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.671 { 00:17:37.671 "cntlid": 137, 00:17:37.671 "qid": 0, 00:17:37.671 "state": "enabled", 00:17:37.671 "thread": "nvmf_tgt_poll_group_000", 00:17:37.671 "listen_address": { 00:17:37.671 "trtype": "TCP", 00:17:37.671 "adrfam": "IPv4", 00:17:37.671 "traddr": "10.0.0.2", 00:17:37.671 "trsvcid": "4420" 00:17:37.671 }, 00:17:37.671 "peer_address": { 00:17:37.671 "trtype": "TCP", 00:17:37.671 "adrfam": "IPv4", 00:17:37.671 "traddr": "10.0.0.1", 00:17:37.671 "trsvcid": "54690" 00:17:37.671 }, 00:17:37.671 "auth": { 00:17:37.671 "state": "completed", 00:17:37.671 "digest": "sha512", 00:17:37.671 "dhgroup": "ffdhe8192" 00:17:37.671 } 00:17:37.671 } 00:17:37.671 ]' 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.671 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.928 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.928 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.928 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.928 11:43:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.928 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.189 11:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.122 11:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.380 11:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.310 00:17:40.310 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.310 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.310 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.568 { 00:17:40.568 "cntlid": 139, 00:17:40.568 "qid": 0, 00:17:40.568 "state": "enabled", 00:17:40.568 "thread": "nvmf_tgt_poll_group_000", 00:17:40.568 "listen_address": { 00:17:40.568 "trtype": "TCP", 00:17:40.568 "adrfam": "IPv4", 00:17:40.568 "traddr": "10.0.0.2", 00:17:40.568 "trsvcid": "4420" 00:17:40.568 }, 00:17:40.568 "peer_address": { 00:17:40.568 "trtype": "TCP", 00:17:40.568 "adrfam": "IPv4", 00:17:40.568 "traddr": "10.0.0.1", 00:17:40.568 "trsvcid": "54716" 00:17:40.568 }, 00:17:40.568 "auth": { 00:17:40.568 "state": "completed", 00:17:40.568 "digest": "sha512", 00:17:40.568 "dhgroup": "ffdhe8192" 00:17:40.568 } 00:17:40.568 } 00:17:40.568 ]' 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.568 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.825 11:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Zjg3YjNmMzE1YjY0ZWUwMjc4NTMwZWU3YTdiOGQzNGWxb95m: --dhchap-ctrl-secret DHHC-1:02:YmJlMWFkNDhiNWIwZjlkNWI1OTA2OTIwMWFlODU3ZGY0YjhjMzBkMWViOTRjODAzbz4Cqg==: 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.758 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.017 11:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.950 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.950 { 00:17:42.950 "cntlid": 141, 00:17:42.950 "qid": 0, 00:17:42.950 "state": "enabled", 00:17:42.950 "thread": "nvmf_tgt_poll_group_000", 00:17:42.950 "listen_address": { 00:17:42.950 "trtype": "TCP", 00:17:42.950 "adrfam": "IPv4", 00:17:42.950 "traddr": "10.0.0.2", 00:17:42.950 "trsvcid": "4420" 00:17:42.950 }, 00:17:42.950 "peer_address": { 00:17:42.950 "trtype": "TCP", 00:17:42.950 "adrfam": "IPv4", 00:17:42.950 "traddr": "10.0.0.1", 00:17:42.950 "trsvcid": "54732" 00:17:42.950 }, 00:17:42.950 "auth": { 00:17:42.950 "state": "completed", 00:17:42.950 "digest": "sha512", 00:17:42.950 "dhgroup": "ffdhe8192" 00:17:42.950 } 00:17:42.950 } 00:17:42.950 ]' 00:17:42.950 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.208 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.208 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.208 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.208 11:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.208 11:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.208 11:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.208 11:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.465 11:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:Mzc2NGRjYjBlZDkyMTEyYjQ0ODNlMmQwZWY0NzMwYjY4MGM4ODE3NTgwMGQ2ZmFhM9TqZw==: --dhchap-ctrl-secret DHHC-1:01:NjU0NGMzMTcwY2U1ODRjZGVlMmE3ZGQ4ZDVhMmU5ZjWUvj6s: 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.396 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.653 11:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.586 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.586 { 00:17:45.586 "cntlid": 143, 00:17:45.586 "qid": 0, 00:17:45.586 "state": "enabled", 00:17:45.586 "thread": "nvmf_tgt_poll_group_000", 00:17:45.586 "listen_address": { 00:17:45.586 "trtype": "TCP", 00:17:45.586 "adrfam": "IPv4", 00:17:45.586 "traddr": "10.0.0.2", 00:17:45.586 "trsvcid": "4420" 00:17:45.586 }, 00:17:45.586 "peer_address": { 00:17:45.586 "trtype": "TCP", 00:17:45.586 "adrfam": "IPv4", 00:17:45.586 "traddr": "10.0.0.1", 00:17:45.586 "trsvcid": "48730" 00:17:45.586 }, 00:17:45.586 "auth": { 00:17:45.586 "state": "completed", 00:17:45.586 "digest": "sha512", 00:17:45.586 "dhgroup": "ffdhe8192" 00:17:45.586 } 00:17:45.586 } 00:17:45.586 ]' 00:17:45.586 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.843 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.843 
11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.843 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.843 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.843 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.843 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.843 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.100 11:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:47.033 11:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.290 11:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.291 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.291 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.247 00:17:48.247 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.247 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.247 11:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.247 { 00:17:48.247 "cntlid": 145, 00:17:48.247 "qid": 0, 00:17:48.247 "state": "enabled", 00:17:48.247 "thread": "nvmf_tgt_poll_group_000", 00:17:48.247 "listen_address": { 00:17:48.247 "trtype": "TCP", 00:17:48.247 "adrfam": "IPv4", 00:17:48.247 "traddr": "10.0.0.2", 00:17:48.247 "trsvcid": "4420" 00:17:48.247 }, 00:17:48.247 "peer_address": { 00:17:48.247 "trtype": "TCP", 00:17:48.247 "adrfam": "IPv4", 00:17:48.247 "traddr": "10.0.0.1", 00:17:48.247 "trsvcid": "48758" 00:17:48.247 }, 00:17:48.247 "auth": { 00:17:48.247 "state": "completed", 00:17:48.247 "digest": "sha512", 00:17:48.247 "dhgroup": "ffdhe8192" 00:17:48.247 } 00:17:48.247 } 00:17:48.247 ]' 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.247 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.506 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.506 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.506 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.506 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.506 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.763 11:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:YzgyODJkZWE0NjVhNmU1NDQyZmE4Mjk5NmNlNTExZGVhMTdjMzg3ZmJiN2Y3ZWQ1lWhkTQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc2NDIzYjUyOWE3YzkyN2VlNWRlMjIzYmJhOTIwZWM4OGY5NmE2YTBlMTMwZWY5YTU2MDUyMjE2NzA1YjhiMb14ErI=: 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:49.694 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.695 11:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:49.695 11:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:50.259 request: 00:17:50.259 { 00:17:50.259 "name": "nvme0", 00:17:50.259 "trtype": "tcp", 00:17:50.259 "traddr": "10.0.0.2", 00:17:50.259 "adrfam": "ipv4", 00:17:50.259 "trsvcid": "4420", 00:17:50.259 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:50.259 "prchk_reftag": false, 00:17:50.259 "prchk_guard": false, 00:17:50.259 "hdgst": false, 00:17:50.259 "ddgst": false, 00:17:50.259 "dhchap_key": "key2", 00:17:50.259 "method": "bdev_nvme_attach_controller", 00:17:50.259 "req_id": 1 00:17:50.259 } 00:17:50.259 Got JSON-RPC error response 00:17:50.259 response: 00:17:50.259 { 00:17:50.259 "code": -5, 00:17:50.259 "message": "Input/output error" 00:17:50.259 } 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.259 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:50.260 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:51.190 request: 00:17:51.190 { 00:17:51.190 "name": "nvme0", 00:17:51.190 "trtype": "tcp", 00:17:51.190 "traddr": "10.0.0.2", 00:17:51.190 "adrfam": "ipv4", 00:17:51.190 "trsvcid": "4420", 00:17:51.190 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:51.190 "prchk_reftag": false, 00:17:51.190 "prchk_guard": false, 00:17:51.190 "hdgst": false, 00:17:51.190 "ddgst": false, 00:17:51.190 "dhchap_key": "key1", 00:17:51.190 "dhchap_ctrlr_key": "ckey2", 00:17:51.190 "method": "bdev_nvme_attach_controller", 00:17:51.190 "req_id": 1 00:17:51.190 } 00:17:51.190 Got JSON-RPC error response 00:17:51.190 response: 00:17:51.190 { 00:17:51.190 "code": -5, 00:17:51.190 "message": "Input/output error" 00:17:51.190 } 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.190 11:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.755 request: 00:17:51.755 { 00:17:51.755 "name": "nvme0", 00:17:51.755 "trtype": "tcp", 00:17:51.755 "traddr": "10.0.0.2", 00:17:51.755 "adrfam": "ipv4", 00:17:51.755 "trsvcid": "4420", 00:17:51.755 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:51.755 "prchk_reftag": false, 00:17:51.755 "prchk_guard": false, 00:17:51.755 "hdgst": false, 00:17:51.755 "ddgst": false, 00:17:51.755 "dhchap_key": "key1", 00:17:51.755 "dhchap_ctrlr_key": "ckey1", 00:17:51.755 "method": "bdev_nvme_attach_controller", 00:17:51.755 "req_id": 1 00:17:51.755 } 00:17:51.755 Got JSON-RPC error response 00:17:51.755 response: 00:17:51.755 { 00:17:51.755 "code": -5, 00:17:51.755 "message": "Input/output error" 00:17:51.755 } 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3017615 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3017615 ']' 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3017615 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017615 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017615' 00:17:52.011 killing process with pid 3017615 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3017615 00:17:52.011 11:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3017615 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3039494 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3039494 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3039494 ']' 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.276 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3039494 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3039494 ']' 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.533 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.790 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.047 11:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.047 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.047 11:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.977 00:17:53.977 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.977 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.977 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.977 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.977 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.978 { 00:17:53.978 
"cntlid": 1, 00:17:53.978 "qid": 0, 00:17:53.978 "state": "enabled", 00:17:53.978 "thread": "nvmf_tgt_poll_group_000", 00:17:53.978 "listen_address": { 00:17:53.978 "trtype": "TCP", 00:17:53.978 "adrfam": "IPv4", 00:17:53.978 "traddr": "10.0.0.2", 00:17:53.978 "trsvcid": "4420" 00:17:53.978 }, 00:17:53.978 "peer_address": { 00:17:53.978 "trtype": "TCP", 00:17:53.978 "adrfam": "IPv4", 00:17:53.978 "traddr": "10.0.0.1", 00:17:53.978 "trsvcid": "51360" 00:17:53.978 }, 00:17:53.978 "auth": { 00:17:53.978 "state": "completed", 00:17:53.978 "digest": "sha512", 00:17:53.978 "dhgroup": "ffdhe8192" 00:17:53.978 } 00:17:53.978 } 00:17:53.978 ]' 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.978 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.235 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.235 11:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.235 11:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.235 11:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.235 11:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.492 11:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MTg3ZDIyYjc4NWVhODU3ZGI3MDEyYTdlOGEwODE2YTRlNWM1ZjRmYmMxMmFlOThmNzFjZDhjNGZhNzMyNDY3MLvIjkI=: 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:55.423 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.680 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.937 request: 00:17:55.937 { 00:17:55.937 "name": "nvme0", 00:17:55.937 "trtype": "tcp", 00:17:55.937 "traddr": "10.0.0.2", 00:17:55.937 "adrfam": "ipv4", 00:17:55.937 "trsvcid": "4420", 00:17:55.937 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:55.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:55.937 "prchk_reftag": false, 00:17:55.937 "prchk_guard": false, 00:17:55.937 "hdgst": false, 00:17:55.937 "ddgst": false, 00:17:55.937 "dhchap_key": "key3", 00:17:55.937 "method": "bdev_nvme_attach_controller", 00:17:55.937 "req_id": 1 00:17:55.937 } 00:17:55.937 Got JSON-RPC error response 00:17:55.937 response: 00:17:55.937 { 00:17:55.937 "code": -5, 00:17:55.937 "message": "Input/output error" 00:17:55.937 } 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:55.937 11:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.195 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.453 request: 00:17:56.453 { 00:17:56.453 "name": "nvme0", 00:17:56.453 "trtype": "tcp", 00:17:56.453 "traddr": "10.0.0.2", 00:17:56.453 "adrfam": "ipv4", 00:17:56.453 "trsvcid": "4420", 00:17:56.453 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:56.453 "prchk_reftag": false, 00:17:56.453 "prchk_guard": false, 00:17:56.453 "hdgst": false, 00:17:56.453 "ddgst": false, 00:17:56.453 "dhchap_key": "key3", 00:17:56.453 "method": "bdev_nvme_attach_controller", 00:17:56.453 "req_id": 1 00:17:56.453 } 00:17:56.453 Got JSON-RPC error response 00:17:56.453 response: 00:17:56.453 { 00:17:56.453 "code": -5, 00:17:56.453 "message": "Input/output error" 00:17:56.453 } 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.453 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.711 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.969 request: 00:17:56.969 { 00:17:56.969 "name": "nvme0", 00:17:56.969 "trtype": "tcp", 00:17:56.969 "traddr": "10.0.0.2", 00:17:56.969 "adrfam": "ipv4", 00:17:56.969 "trsvcid": "4420", 00:17:56.969 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:56.969 "prchk_reftag": false, 00:17:56.969 "prchk_guard": false, 00:17:56.969 "hdgst": false, 00:17:56.969 "ddgst": false, 00:17:56.969 
"dhchap_key": "key0", 00:17:56.969 "dhchap_ctrlr_key": "key1", 00:17:56.969 "method": "bdev_nvme_attach_controller", 00:17:56.969 "req_id": 1 00:17:56.969 } 00:17:56.969 Got JSON-RPC error response 00:17:56.969 response: 00:17:56.969 { 00:17:56.969 "code": -5, 00:17:56.969 "message": "Input/output error" 00:17:56.969 } 00:17:56.969 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:56.969 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.969 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.969 11:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.969 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:56.969 11:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.534 00:17:57.534 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:57.534 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:57.534 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.534 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.534 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.534 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.791 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:57.791 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:57.791 11:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3017756 00:17:57.791 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3017756 ']' 00:17:57.791 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3017756 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017756 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017756' 00:17:58.048 killing process with pid 3017756 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3017756 00:17:58.048 11:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3017756 
00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:58.305 rmmod nvme_tcp 00:17:58.305 rmmod nvme_fabrics 00:17:58.305 rmmod nvme_keyring 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3039494 ']' 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3039494 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3039494 ']' 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3039494 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3039494 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3039494' 00:17:58.305 killing process with pid 3039494 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3039494 00:17:58.305 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3039494 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.872 11:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.775 11:44:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:00.775 11:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.AiE /tmp/spdk.key-sha256.YME /tmp/spdk.key-sha384.KuH /tmp/spdk.key-sha512.OT6 /tmp/spdk.key-sha512.2sI /tmp/spdk.key-sha384.7Eu /tmp/spdk.key-sha256.K1M '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:00.775 00:18:00.775 real 3m1.730s 00:18:00.775 user 7m4.630s 00:18:00.775 sys 0m25.430s 00:18:00.775 11:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.775 11:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.775 ************************************ 00:18:00.775 END TEST nvmf_auth_target 00:18:00.775 ************************************ 00:18:00.775 11:44:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:00.775 11:44:08 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:00.775 11:44:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.775 11:44:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:00.775 11:44:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.775 11:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:00.775 ************************************ 00:18:00.775 START TEST nvmf_bdevio_no_huge 00:18:00.775 ************************************ 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.775 * Looking for test storage... 00:18:00.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.775 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.776 11:44:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:00.776 11:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.308 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:03.309 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:03.309 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:03.309 
11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:03.309 Found net devices under 0000:84:00.0: cvl_0_0 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:03.309 Found net devices under 0000:84:00.1: cvl_0_1 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.309 11:44:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:18:03.309 00:18:03.309 --- 10.0.0.2 ping statistics --- 00:18:03.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.309 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:03.309 00:18:03.309 --- 10.0.0.1 ping statistics --- 00:18:03.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.309 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.309 11:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3042294 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3042294 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3042294 ']' 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.309 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.309 [2024-07-15 11:44:11.058536] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:03.309 [2024-07-15 11:44:11.058638] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:03.309 [2024-07-15 11:44:11.128911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.309 [2024-07-15 11:44:11.226349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.309 [2024-07-15 11:44:11.226423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.309 [2024-07-15 11:44:11.226446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.309 [2024-07-15 11:44:11.226457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.309 [2024-07-15 11:44:11.226466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
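For the no-hugepages bdevio run, the target application is launched inside the cvl_0_0_ns_spdk network namespace that common.sh set up above, and the test then provisions the subsystem over RPC. A rough sketch of that flow, assuming the same paths and default RPC socket as this run (the real waitforlisten helper polls the socket differently than the simple loop shown here):

# Sketch only; namespace name, core mask and memory size mirror the log above.
tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

ip netns exec cvl_0_0_ns_spdk $tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Wait until the target answers on its default RPC socket before issuing RPCs
# (Unix sockets are not namespaced, so rpc.py can run outside the netns).
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# The bdevio test then creates the transport, a Malloc bdev and a listener, as above:
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420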
00:18:03.309 [2024-07-15 11:44:11.226605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:03.309 [2024-07-15 11:44:11.226669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:03.309 [2024-07-15 11:44:11.226746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.309 [2024-07-15 11:44:11.226746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 [2024-07-15 11:44:11.355964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 Malloc0 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 [2024-07-15 11:44:11.394155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:03.568 { 00:18:03.568 "params": { 00:18:03.568 "name": "Nvme$subsystem", 00:18:03.568 "trtype": "$TEST_TRANSPORT", 00:18:03.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.568 "adrfam": "ipv4", 00:18:03.568 "trsvcid": "$NVMF_PORT", 00:18:03.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.568 "hdgst": ${hdgst:-false}, 00:18:03.568 "ddgst": ${ddgst:-false} 00:18:03.568 }, 00:18:03.568 "method": "bdev_nvme_attach_controller" 00:18:03.568 } 00:18:03.568 EOF 00:18:03.568 )") 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:03.568 11:44:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:03.568 "params": { 00:18:03.568 "name": "Nvme1", 00:18:03.568 "trtype": "tcp", 00:18:03.568 "traddr": "10.0.0.2", 00:18:03.568 "adrfam": "ipv4", 00:18:03.568 "trsvcid": "4420", 00:18:03.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.568 "hdgst": false, 00:18:03.568 "ddgst": false 00:18:03.568 }, 00:18:03.568 "method": "bdev_nvme_attach_controller" 00:18:03.568 }' 00:18:03.568 [2024-07-15 11:44:11.441569] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:03.568 [2024-07-15 11:44:11.441652] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3042320 ] 00:18:03.568 [2024-07-15 11:44:11.508091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.826 [2024-07-15 11:44:11.624785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.826 [2024-07-15 11:44:11.624825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.826 [2024-07-15 11:44:11.624829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.083 I/O targets: 00:18:04.083 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:04.083 00:18:04.083 00:18:04.083 CUnit - A unit testing framework for C - Version 2.1-3 00:18:04.083 http://cunit.sourceforge.net/ 00:18:04.083 00:18:04.083 00:18:04.083 Suite: bdevio tests on: Nvme1n1 00:18:04.083 Test: blockdev write read block ...passed 00:18:04.083 Test: blockdev write zeroes read block ...passed 00:18:04.083 Test: blockdev write zeroes read no split ...passed 00:18:04.083 Test: blockdev write zeroes read split ...passed 00:18:04.341 Test: blockdev write zeroes read split partial ...passed 00:18:04.341 Test: blockdev reset ...[2024-07-15 11:44:12.073127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:04.341 [2024-07-15 11:44:12.073247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171670 (9): Bad file descriptor 00:18:04.341 [2024-07-15 11:44:12.088542] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:04.341 passed 00:18:04.341 Test: blockdev write read 8 blocks ...passed 00:18:04.341 Test: blockdev write read size > 128k ...passed 00:18:04.341 Test: blockdev write read invalid size ...passed 00:18:04.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:04.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:04.341 Test: blockdev write read max offset ...passed 00:18:04.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:04.341 Test: blockdev writev readv 8 blocks ...passed 00:18:04.341 Test: blockdev writev readv 30 x 1block ...passed 00:18:04.599 Test: blockdev writev readv block ...passed 00:18:04.599 Test: blockdev writev readv size > 128k ...passed 00:18:04.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:04.599 Test: blockdev comparev and writev ...[2024-07-15 11:44:12.341757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.341793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.341823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.341840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.342367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.342399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.342420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.342435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.342888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.342923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.342945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.342961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.343439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.343463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.343491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:04.599 [2024-07-15 11:44:12.343507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:04.599 passed 00:18:04.599 Test: blockdev nvme passthru rw ...passed 00:18:04.599 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:44:12.426075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.599 [2024-07-15 11:44:12.426103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.426258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.599 [2024-07-15 11:44:12.426282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.426423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.599 [2024-07-15 11:44:12.426447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:04.599 [2024-07-15 11:44:12.426607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:04.599 [2024-07-15 11:44:12.426631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:04.599 passed 00:18:04.599 Test: blockdev nvme admin passthru ...passed 00:18:04.599 Test: blockdev copy ...passed 00:18:04.599 00:18:04.599 Run Summary: Type Total Ran Passed Failed Inactive 00:18:04.600 suites 1 1 n/a 0 0 00:18:04.600 tests 23 23 23 0 0 00:18:04.600 asserts 152 152 152 0 n/a 00:18:04.600 00:18:04.600 Elapsed time = 1.153 seconds 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:05.165 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.166 rmmod nvme_tcp 00:18:05.166 rmmod nvme_fabrics 00:18:05.166 rmmod nvme_keyring 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3042294 ']' 00:18:05.166 11:44:12 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3042294 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3042294 ']' 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3042294 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3042294 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3042294' 00:18:05.166 killing process with pid 3042294 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3042294 00:18:05.166 11:44:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3042294 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.428 11:44:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.974 11:44:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.974 00:18:07.974 real 0m6.742s 00:18:07.974 user 0m11.313s 00:18:07.974 sys 0m2.604s 00:18:07.974 11:44:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:07.974 11:44:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.974 ************************************ 00:18:07.975 END TEST nvmf_bdevio_no_huge 00:18:07.975 ************************************ 00:18:07.975 11:44:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:07.975 11:44:15 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:07.975 11:44:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:07.975 11:44:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.975 11:44:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.975 ************************************ 00:18:07.975 START TEST nvmf_tls 00:18:07.975 ************************************ 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:07.975 * Looking for test storage... 
00:18:07.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.975 11:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:09.875 
11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:09.875 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:09.875 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:09.875 Found net devices under 0000:84:00.0: cvl_0_0 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:09.875 Found net devices under 0000:84:00.1: cvl_0_1 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:09.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:18:09.875 00:18:09.875 --- 10.0.0.2 ping statistics --- 00:18:09.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.875 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:09.875 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:18:09.875 00:18:09.875 --- 10.0.0.1 ping statistics --- 00:18:09.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.876 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3044529 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3044529 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3044529 ']' 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.876 11:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.876 [2024-07-15 11:44:17.821716] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:09.876 [2024-07-15 11:44:17.821810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.876 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.133 [2024-07-15 11:44:17.886297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.133 [2024-07-15 11:44:17.986163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.133 [2024-07-15 11:44:17.986221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
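The network plumbing recorded just above is what the rest of the TLS suite relies on: the common.sh helpers split the two E810 ports into a small back-to-back topology, moving cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2/24), leaving cvl_0_1 in the default namespace as the initiator side (10.0.0.1/24), opening TCP port 4420 in iptables, and confirming reachability with one ping in each direction before nvmf_tgt is launched inside the namespace. Condensed from the commands in the log (interface names and addresses as printed there; the helpers also cover cleanup and error paths not shown here):

    # target-side port into its own namespace, initiator port left in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # NVMe/TCP port
    ping -c 1 10.0.0.2                                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator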
00:18:10.133 [2024-07-15 11:44:17.986246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.133 [2024-07-15 11:44:17.986257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.133 [2024-07-15 11:44:17.986266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.133 [2024-07-15 11:44:17.986309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:10.133 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:10.390 true 00:18:10.390 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:10.390 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.647 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:10.647 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:10.647 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:10.904 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.904 11:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:11.162 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:11.162 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:11.162 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:11.419 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.419 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:11.677 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:11.677 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:11.677 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.677 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:11.934 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:11.934 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:11.934 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:12.191 11:44:20 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:12.191 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:12.449 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:12.449 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:12.449 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:12.708 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:12.708 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.sIinpP6axv 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GhPFrzv0gs 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.sIinpP6axv 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GhPFrzv0gs 00:18:13.007 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:13.284 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:13.542 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.sIinpP6axv 00:18:13.542 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sIinpP6axv 00:18:13.542 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:13.800 [2024-07-15 11:44:21.748799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.800 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.365 11:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.365 [2024-07-15 11:44:22.322434] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.365 [2024-07-15 11:44:22.322650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.365 11:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.623 malloc0 00:18:14.623 11:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.882 11:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sIinpP6axv 00:18:15.140 [2024-07-15 11:44:23.067878] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:15.140 11:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sIinpP6axv 00:18:15.140 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.338 Initializing NVMe Controllers 00:18:27.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:27.338 Initialization complete. Launching workers. 
00:18:27.338 ======================================================== 00:18:27.338 Latency(us) 00:18:27.338 Device Information : IOPS MiB/s Average min max 00:18:27.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8800.69 34.38 7274.18 1141.40 9237.49 00:18:27.338 ======================================================== 00:18:27.338 Total : 8800.69 34.38 7274.18 1141.40 9237.49 00:18:27.338 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sIinpP6axv 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sIinpP6axv' 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3046311 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3046311 /var/tmp/bdevperf.sock 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3046311 ']' 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.338 [2024-07-15 11:44:33.224960] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
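The perf run summarized above and the bdevperf job now starting both reuse the key file generated earlier in the suite. The string written to /tmp/tmp.sIinpP6axv follows the TLS PSK interchange layout the format_interchange_psk helper emits: a "NVMeTLSkey-1" prefix, a two-digit hash indicator ("01" here, matching the digest argument passed to the helper), and a base64 field whose decoded bytes begin with the configured key text followed by what appears to be a 4-byte checksum of it. The first part can be confirmed directly from the value in the log; treat the checksum detail as an assumption rather than something the log proves:

    # Decode the base64 field of the key printed above. The first 32 bytes are the
    # configured key text; the trailing 4 bytes are assumed to be a CRC32 of it.
    echo -n 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ' | base64 -d | head -c 32; echo

On the target side the same file is what ties the key to a host: the listener for cnode1 is added with -k (which, per the "TLS support is considered experimental" notice in the log, enables TLS on that listener) and nvmf_subsystem_add_host registers host1 with --psk /tmp/tmp.sIinpP6axv, while the initiators hand the file in via --psk-path (spdk_nvme_perf) or --psk (bdev_nvme_attach_controller), exactly as the rpc.py calls traced above show.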
00:18:27.338 [2024-07-15 11:44:33.225068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046311 ] 00:18:27.338 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.338 [2024-07-15 11:44:33.290980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.338 [2024-07-15 11:44:33.397512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sIinpP6axv 00:18:27.338 [2024-07-15 11:44:33.717800] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.338 [2024-07-15 11:44:33.717915] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:27.338 TLSTESTn1 00:18:27.338 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:27.338 Running I/O for 10 seconds... 00:18:37.300 00:18:37.300 Latency(us) 00:18:37.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.300 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:37.300 Verification LBA range: start 0x0 length 0x2000 00:18:37.300 TLSTESTn1 : 10.03 3592.68 14.03 0.00 0.00 35558.61 8641.04 55147.33 00:18:37.300 =================================================================================================================== 00:18:37.300 Total : 3592.68 14.03 0.00 0.00 35558.61 8641.04 55147.33 00:18:37.300 0 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3046311 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3046311 ']' 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3046311 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.300 11:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3046311 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3046311' 00:18:37.300 killing process with pid 3046311 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3046311 00:18:37.300 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.300 00:18:37.300 Latency(us) 00:18:37.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:37.300 =================================================================================================================== 00:18:37.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.300 [2024-07-15 11:44:44.015485] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3046311 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GhPFrzv0gs 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GhPFrzv0gs 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GhPFrzv0gs 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GhPFrzv0gs' 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3047615 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3047615 /var/tmp/bdevperf.sock 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3047615 ']' 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.300 [2024-07-15 11:44:44.330480] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:37.300 [2024-07-15 11:44:44.330570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3047615 ] 00:18:37.300 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.300 [2024-07-15 11:44:44.388212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.300 [2024-07-15 11:44:44.491034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GhPFrzv0gs 00:18:37.300 [2024-07-15 11:44:44.821156] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.300 [2024-07-15 11:44:44.821278] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:37.300 [2024-07-15 11:44:44.828761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:37.300 [2024-07-15 11:44:44.829238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86f6d0 (107): Transport endpoint is not connected 00:18:37.300 [2024-07-15 11:44:44.830227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86f6d0 (9): Bad file descriptor 00:18:37.300 [2024-07-15 11:44:44.831227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:37.300 [2024-07-15 11:44:44.831247] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:37.300 [2024-07-15 11:44:44.831264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
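This first negative case deliberately points the initiator at the other key file (/tmp/tmp.GhPFrzv0gs), which was never registered for host1 on cnode1, so the attach fails with the spdk_sock_recv()/errno 107 errors above, presumably because the secure channel cannot be established with a mismatched key. The JSON-RPC request and Input/output error response dumped just below are the expected outcome, and the surrounding NOT wrapper only passes because bdev_nvme_attach_controller returns an error. A minimal way to reproduce the same check against a running bdevperf RPC socket (command taken from the log, rpc.py path shortened; socket path and NQNs are this run's values):

    # attaching with a PSK the target does not know should fail
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GhPFrzv0gs \
        && echo 'unexpected success' || echo 'failed as expected'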
00:18:37.300 request: 00:18:37.300 { 00:18:37.300 "name": "TLSTEST", 00:18:37.300 "trtype": "tcp", 00:18:37.300 "traddr": "10.0.0.2", 00:18:37.300 "adrfam": "ipv4", 00:18:37.300 "trsvcid": "4420", 00:18:37.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.300 "prchk_reftag": false, 00:18:37.300 "prchk_guard": false, 00:18:37.300 "hdgst": false, 00:18:37.300 "ddgst": false, 00:18:37.300 "psk": "/tmp/tmp.GhPFrzv0gs", 00:18:37.300 "method": "bdev_nvme_attach_controller", 00:18:37.300 "req_id": 1 00:18:37.300 } 00:18:37.300 Got JSON-RPC error response 00:18:37.300 response: 00:18:37.300 { 00:18:37.300 "code": -5, 00:18:37.300 "message": "Input/output error" 00:18:37.300 } 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3047615 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3047615 ']' 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3047615 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3047615 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3047615' 00:18:37.300 killing process with pid 3047615 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3047615 00:18:37.300 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.300 00:18:37.300 Latency(us) 00:18:37.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.300 =================================================================================================================== 00:18:37.300 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.300 [2024-07-15 11:44:44.879807] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:37.300 11:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3047615 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sIinpP6axv 00:18:37.300 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sIinpP6axv 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sIinpP6axv 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sIinpP6axv' 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3047756 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3047756 /var/tmp/bdevperf.sock 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3047756 ']' 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.301 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.301 [2024-07-15 11:44:45.190949] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:37.301 [2024-07-15 11:44:45.191038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3047756 ] 00:18:37.301 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.301 [2024-07-15 11:44:45.249528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.559 [2024-07-15 11:44:45.356441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.559 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.559 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:37.559 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sIinpP6axv 00:18:37.817 [2024-07-15 11:44:45.719448] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.817 [2024-07-15 11:44:45.719551] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:37.817 [2024-07-15 11:44:45.730949] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:37.817 [2024-07-15 11:44:45.730980] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:37.817 [2024-07-15 11:44:45.731032] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:37.817 [2024-07-15 11:44:45.731402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23906d0 (107): Transport endpoint is not connected 00:18:37.817 [2024-07-15 11:44:45.732393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23906d0 (9): Bad file descriptor 00:18:37.817 [2024-07-15 11:44:45.733393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:37.817 [2024-07-15 11:44:45.733412] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:37.817 [2024-07-15 11:44:45.733431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
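The second and third negative cases keep the correct key file but change who is connecting: here the initiator presents hostnqn host2 (and the next attempt targets cnode2 instead of cnode1), so the target-side lookup fails with "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" as logged above. The identity the target searches for is built from the host and subsystem NQNs of the connection attempt, which is why a PSK registered for host1/cnode1 covers neither variation; the request/response dump below is again the expected Input/output error. For illustration only, the looked-up identity can be written out as (format copied from the error line above):

    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1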
00:18:37.817 request: 00:18:37.817 { 00:18:37.817 "name": "TLSTEST", 00:18:37.817 "trtype": "tcp", 00:18:37.817 "traddr": "10.0.0.2", 00:18:37.817 "adrfam": "ipv4", 00:18:37.817 "trsvcid": "4420", 00:18:37.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.817 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:37.817 "prchk_reftag": false, 00:18:37.817 "prchk_guard": false, 00:18:37.817 "hdgst": false, 00:18:37.817 "ddgst": false, 00:18:37.817 "psk": "/tmp/tmp.sIinpP6axv", 00:18:37.817 "method": "bdev_nvme_attach_controller", 00:18:37.817 "req_id": 1 00:18:37.817 } 00:18:37.817 Got JSON-RPC error response 00:18:37.817 response: 00:18:37.817 { 00:18:37.817 "code": -5, 00:18:37.817 "message": "Input/output error" 00:18:37.817 } 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3047756 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3047756 ']' 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3047756 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3047756 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3047756' 00:18:37.817 killing process with pid 3047756 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3047756 00:18:37.817 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.817 00:18:37.817 Latency(us) 00:18:37.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.817 =================================================================================================================== 00:18:37.817 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.817 [2024-07-15 11:44:45.779694] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:37.817 11:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3047756 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sIinpP6axv 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sIinpP6axv 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sIinpP6axv 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sIinpP6axv' 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3047892 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3047892 /var/tmp/bdevperf.sock 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3047892 ']' 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.075 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.333 [2024-07-15 11:44:46.086067] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:38.333 [2024-07-15 11:44:46.086158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3047892 ] 00:18:38.333 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.333 [2024-07-15 11:44:46.143530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.333 [2024-07-15 11:44:46.246618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.590 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.590 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.590 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sIinpP6axv 00:18:38.848 [2024-07-15 11:44:46.628617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.848 [2024-07-15 11:44:46.628758] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:38.848 [2024-07-15 11:44:46.633746] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.848 [2024-07-15 11:44:46.633791] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:38.848 [2024-07-15 11:44:46.633832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:38.848 [2024-07-15 11:44:46.634451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20776d0 (107): Transport endpoint is not connected 00:18:38.848 [2024-07-15 11:44:46.635439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20776d0 (9): Bad file descriptor 00:18:38.848 [2024-07-15 11:44:46.636438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:38.848 [2024-07-15 11:44:46.636459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:38.848 [2024-07-15 11:44:46.636478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:38.848 request: 00:18:38.848 { 00:18:38.848 "name": "TLSTEST", 00:18:38.848 "trtype": "tcp", 00:18:38.848 "traddr": "10.0.0.2", 00:18:38.848 "adrfam": "ipv4", 00:18:38.848 "trsvcid": "4420", 00:18:38.848 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:38.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.848 "prchk_reftag": false, 00:18:38.848 "prchk_guard": false, 00:18:38.848 "hdgst": false, 00:18:38.848 "ddgst": false, 00:18:38.848 "psk": "/tmp/tmp.sIinpP6axv", 00:18:38.848 "method": "bdev_nvme_attach_controller", 00:18:38.848 "req_id": 1 00:18:38.848 } 00:18:38.848 Got JSON-RPC error response 00:18:38.848 response: 00:18:38.848 { 00:18:38.848 "code": -5, 00:18:38.848 "message": "Input/output error" 00:18:38.848 } 00:18:38.848 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3047892 00:18:38.848 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3047892 ']' 00:18:38.848 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3047892 00:18:38.848 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:38.848 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.848 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3047892 00:18:38.849 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:38.849 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:38.849 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3047892' 00:18:38.849 killing process with pid 3047892 00:18:38.849 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3047892 00:18:38.849 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.849 00:18:38.849 Latency(us) 00:18:38.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.849 =================================================================================================================== 00:18:38.849 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.849 [2024-07-15 11:44:46.688449] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:38.849 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3047892 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3048032 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3048032 /var/tmp/bdevperf.sock 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3048032 ']' 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.106 11:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.106 [2024-07-15 11:44:46.986926] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:39.106 [2024-07-15 11:44:46.987019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048032 ] 00:18:39.106 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.106 [2024-07-15 11:44:47.045262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.364 [2024-07-15 11:44:47.150225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.364 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.364 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:39.364 11:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:39.621 [2024-07-15 11:44:47.535647] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:39.621 [2024-07-15 11:44:47.537397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a0e10 (9): Bad file descriptor 00:18:39.621 [2024-07-15 11:44:47.538394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:39.621 [2024-07-15 11:44:47.538416] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:39.621 [2024-07-15 11:44:47.538434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:39.621 request: 00:18:39.621 { 00:18:39.621 "name": "TLSTEST", 00:18:39.621 "trtype": "tcp", 00:18:39.621 "traddr": "10.0.0.2", 00:18:39.621 "adrfam": "ipv4", 00:18:39.621 "trsvcid": "4420", 00:18:39.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.621 "prchk_reftag": false, 00:18:39.621 "prchk_guard": false, 00:18:39.621 "hdgst": false, 00:18:39.621 "ddgst": false, 00:18:39.621 "method": "bdev_nvme_attach_controller", 00:18:39.621 "req_id": 1 00:18:39.621 } 00:18:39.621 Got JSON-RPC error response 00:18:39.621 response: 00:18:39.621 { 00:18:39.621 "code": -5, 00:18:39.621 "message": "Input/output error" 00:18:39.621 } 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3048032 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3048032 ']' 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3048032 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3048032 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3048032' 00:18:39.621 killing process with pid 3048032 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3048032 00:18:39.621 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.621 00:18:39.621 Latency(us) 00:18:39.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.621 =================================================================================================================== 00:18:39.621 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.621 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3048032 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3044529 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3044529 ']' 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3044529 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.879 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3044529 00:18:40.136 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:40.136 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:40.136 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3044529' 00:18:40.136 
killing process with pid 3044529 00:18:40.136 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3044529 00:18:40.136 [2024-07-15 11:44:47.873026] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:40.137 11:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3044529 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.WbnBCwb0Vw 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.WbnBCwb0Vw 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3048185 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:40.394 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3048185 00:18:40.395 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3048185 ']' 00:18:40.395 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.395 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.395 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.395 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.395 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.395 [2024-07-15 11:44:48.253330] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
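For reference, the NVMeTLSkey-1:02:... value produced by format_interchange_psk above can be reproduced with a short Python sketch (an illustration, not part of the test run). The base64 payload in the logged key decodes to the ASCII PSK string followed by four extra bytes; the sketch assumes those trailing bytes are the little-endian CRC-32 of the PSK and that "02" selects the SHA-384 variant.

import base64
import zlib

# Raw PSK exactly as passed to format_interchange_psk above, kept as ASCII text.
psk = b"00112233445566778899aabbccddeeff0011223344556677"
# Assumption: the interchange format appends the little-endian CRC-32 of the PSK.
crc = zlib.crc32(psk).to_bytes(4, "little")
# "NVMeTLSkey-1" prefix, "02" hash identifier, base64 payload, trailing ':'.
print("NVMeTLSkey-1:02:" + base64.b64encode(psk + crc).decode() + ":")

If those assumptions hold, the printed value matches the key_long string logged above.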
00:18:40.395 [2024-07-15 11:44:48.253422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.395 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.395 [2024-07-15 11:44:48.315791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.652 [2024-07-15 11:44:48.421589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.652 [2024-07-15 11:44:48.421642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.652 [2024-07-15 11:44:48.421665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.652 [2024-07-15 11:44:48.421677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.652 [2024-07-15 11:44:48.421687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.652 [2024-07-15 11:44:48.421712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.WbnBCwb0Vw 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WbnBCwb0Vw 00:18:40.652 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.909 [2024-07-15 11:44:48.842317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.909 11:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:41.167 11:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:41.425 [2024-07-15 11:44:49.355667] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.425 [2024-07-15 11:44:49.355921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.425 11:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:41.682 malloc0 00:18:41.682 11:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:41.939 11:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WbnBCwb0Vw 00:18:42.197 [2024-07-15 11:44:50.103839] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WbnBCwb0Vw 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WbnBCwb0Vw' 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3048347 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3048347 /var/tmp/bdevperf.sock 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3048347 ']' 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.197 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.197 [2024-07-15 11:44:50.164762] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:42.197 [2024-07-15 11:44:50.164839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048347 ] 00:18:42.455 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.455 [2024-07-15 11:44:50.231891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.455 [2024-07-15 11:44:50.343225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.712 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.712 11:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:42.712 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WbnBCwb0Vw 00:18:42.712 [2024-07-15 11:44:50.676998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.712 [2024-07-15 11:44:50.677136] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:42.970 TLSTESTn1 00:18:42.970 11:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:42.970 Running I/O for 10 seconds... 00:18:52.932 00:18:52.932 Latency(us) 00:18:52.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.932 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:52.932 Verification LBA range: start 0x0 length 0x2000 00:18:52.932 TLSTESTn1 : 10.02 3545.86 13.85 0.00 0.00 36040.45 5606.97 50098.63 00:18:52.932 =================================================================================================================== 00:18:52.932 Total : 3545.86 13.85 0.00 0.00 36040.45 5606.97 50098.63 00:18:52.932 0 00:18:53.189 11:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.189 11:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3048347 00:18:53.189 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3048347 ']' 00:18:53.189 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3048347 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3048347 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3048347' 00:18:53.190 killing process with pid 3048347 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3048347 00:18:53.190 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.190 00:18:53.190 Latency(us) 00:18:53.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:53.190 =================================================================================================================== 00:18:53.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.190 [2024-07-15 11:45:00.954405] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:53.190 11:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3048347 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.WbnBCwb0Vw 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WbnBCwb0Vw 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WbnBCwb0Vw 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WbnBCwb0Vw 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WbnBCwb0Vw' 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3049731 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3049731 /var/tmp/bdevperf.sock 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3049731 ']' 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.448 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.448 [2024-07-15 11:45:01.257853] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:53.448 [2024-07-15 11:45:01.257947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049731 ] 00:18:53.448 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.448 [2024-07-15 11:45:01.326927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.706 [2024-07-15 11:45:01.439072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.706 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.706 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.706 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WbnBCwb0Vw 00:18:53.964 [2024-07-15 11:45:01.773223] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.964 [2024-07-15 11:45:01.773303] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:53.964 [2024-07-15 11:45:01.773317] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.WbnBCwb0Vw 00:18:53.964 request: 00:18:53.964 { 00:18:53.964 "name": "TLSTEST", 00:18:53.964 "trtype": "tcp", 00:18:53.964 "traddr": "10.0.0.2", 00:18:53.964 "adrfam": "ipv4", 00:18:53.964 "trsvcid": "4420", 00:18:53.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.964 "prchk_reftag": false, 00:18:53.964 "prchk_guard": false, 00:18:53.964 "hdgst": false, 00:18:53.964 "ddgst": false, 00:18:53.964 "psk": "/tmp/tmp.WbnBCwb0Vw", 00:18:53.964 "method": "bdev_nvme_attach_controller", 00:18:53.964 "req_id": 1 00:18:53.964 } 00:18:53.964 Got JSON-RPC error response 00:18:53.964 response: 00:18:53.964 { 00:18:53.964 "code": -1, 00:18:53.964 "message": "Operation not permitted" 00:18:53.964 } 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3049731 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3049731 ']' 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3049731 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3049731 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3049731' 00:18:53.964 killing process with pid 3049731 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3049731 00:18:53.964 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.964 00:18:53.964 Latency(us) 00:18:53.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.964 
=================================================================================================================== 00:18:53.964 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.964 11:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3049731 00:18:54.222 11:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:54.222 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:54.222 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3048185 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3048185 ']' 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3048185 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3048185 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3048185' 00:18:54.223 killing process with pid 3048185 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3048185 00:18:54.223 [2024-07-15 11:45:02.082183] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:54.223 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3048185 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3049924 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3049924 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3049924 ']' 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
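The chmod sequence in this part of the run (0666 on the key file leads to "Incorrect permissions for PSK file" / "Operation not permitted", while the earlier 0600 file is accepted) suggests PSK files readable by group or other are rejected. A minimal, illustrative pre-check is sketched below; it is not part of the test scripts, and the path is simply the temporary key file used in this run.

import os
import stat

def psk_permissions_ok(path: str) -> bool:
    # Reject any PSK file with group/other permission bits set,
    # mirroring the behaviour observed in the log above.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

print(psk_permissions_ok("/tmp/tmp.WbnBCwb0Vw"))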
00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.480 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.480 [2024-07-15 11:45:02.422077] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:54.480 [2024-07-15 11:45:02.422165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.480 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.739 [2024-07-15 11:45:02.493433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.739 [2024-07-15 11:45:02.603430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.739 [2024-07-15 11:45:02.603489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.739 [2024-07-15 11:45:02.603501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.739 [2024-07-15 11:45:02.603512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.739 [2024-07-15 11:45:02.603521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.739 [2024-07-15 11:45:02.603562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.739 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.739 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:54.739 11:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.739 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:54.739 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.WbnBCwb0Vw 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WbnBCwb0Vw 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.WbnBCwb0Vw 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WbnBCwb0Vw 00:18:54.997 11:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.255 [2024-07-15 11:45:02.989618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.255 11:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.512 
11:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.769 [2024-07-15 11:45:03.535146] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.769 [2024-07-15 11:45:03.535388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.769 11:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:56.026 malloc0 00:18:56.026 11:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:56.285 11:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WbnBCwb0Vw 00:18:56.573 [2024-07-15 11:45:04.413462] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:56.573 [2024-07-15 11:45:04.413502] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:56.573 [2024-07-15 11:45:04.413532] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:56.573 request: 00:18:56.573 { 00:18:56.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.573 "host": "nqn.2016-06.io.spdk:host1", 00:18:56.573 "psk": "/tmp/tmp.WbnBCwb0Vw", 00:18:56.573 "method": "nvmf_subsystem_add_host", 00:18:56.573 "req_id": 1 00:18:56.573 } 00:18:56.573 Got JSON-RPC error response 00:18:56.573 response: 00:18:56.573 { 00:18:56.573 "code": -32603, 00:18:56.573 "message": "Internal error" 00:18:56.573 } 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3049924 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3049924 ']' 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3049924 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3049924 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3049924' 00:18:56.573 killing process with pid 3049924 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3049924 00:18:56.573 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3049924 00:18:56.832 11:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.WbnBCwb0Vw 00:18:56.832 11:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:56.832 
11:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.832 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.832 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.832 11:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3050220 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3050220 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3050220 ']' 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.833 11:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.833 [2024-07-15 11:45:04.789385] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:56.833 [2024-07-15 11:45:04.789474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.090 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.090 [2024-07-15 11:45:04.856549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.090 [2024-07-15 11:45:04.966436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.090 [2024-07-15 11:45:04.966495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.090 [2024-07-15 11:45:04.966519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.090 [2024-07-15 11:45:04.966530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.090 [2024-07-15 11:45:04.966539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.090 [2024-07-15 11:45:04.966565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.090 11:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.090 11:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:57.090 11:45:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.090 11:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.090 11:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.348 11:45:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.348 11:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.WbnBCwb0Vw 00:18:57.348 11:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WbnBCwb0Vw 00:18:57.348 11:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.348 [2024-07-15 11:45:05.318316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.607 11:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.607 11:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.865 [2024-07-15 11:45:05.815565] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.865 [2024-07-15 11:45:05.815804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.865 11:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:58.122 malloc0 00:18:58.122 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.380 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WbnBCwb0Vw 00:18:58.638 [2024-07-15 11:45:06.569147] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3050504 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3050504 /var/tmp/bdevperf.sock 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3050504 ']' 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.638 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.896 [2024-07-15 11:45:06.633165] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:58.896 [2024-07-15 11:45:06.633256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050504 ] 00:18:58.896 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.896 [2024-07-15 11:45:06.693345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.896 [2024-07-15 11:45:06.802926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.154 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.154 11:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.154 11:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WbnBCwb0Vw 00:18:59.410 [2024-07-15 11:45:07.160866] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.410 [2024-07-15 11:45:07.160979] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.410 TLSTESTn1 00:18:59.410 11:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:59.669 11:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:59.669 "subsystems": [ 00:18:59.669 { 00:18:59.669 "subsystem": "keyring", 00:18:59.669 "config": [] 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "subsystem": "iobuf", 00:18:59.669 "config": [ 00:18:59.669 { 00:18:59.669 "method": "iobuf_set_options", 00:18:59.669 "params": { 00:18:59.669 "small_pool_count": 8192, 00:18:59.669 "large_pool_count": 1024, 00:18:59.669 "small_bufsize": 8192, 00:18:59.669 "large_bufsize": 135168 00:18:59.669 } 00:18:59.669 } 00:18:59.669 ] 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "subsystem": "sock", 00:18:59.669 "config": [ 00:18:59.669 { 00:18:59.669 "method": "sock_set_default_impl", 00:18:59.669 "params": { 00:18:59.669 "impl_name": "posix" 00:18:59.669 } 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "method": "sock_impl_set_options", 00:18:59.669 "params": { 00:18:59.669 "impl_name": "ssl", 00:18:59.669 "recv_buf_size": 4096, 00:18:59.669 "send_buf_size": 4096, 00:18:59.669 "enable_recv_pipe": true, 00:18:59.669 "enable_quickack": false, 00:18:59.669 "enable_placement_id": 0, 00:18:59.669 "enable_zerocopy_send_server": true, 00:18:59.669 "enable_zerocopy_send_client": false, 00:18:59.669 "zerocopy_threshold": 0, 00:18:59.669 "tls_version": 0, 00:18:59.669 "enable_ktls": false 00:18:59.669 } 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "method": "sock_impl_set_options", 00:18:59.669 "params": { 00:18:59.669 "impl_name": "posix", 00:18:59.669 "recv_buf_size": 2097152, 00:18:59.669 
"send_buf_size": 2097152, 00:18:59.669 "enable_recv_pipe": true, 00:18:59.669 "enable_quickack": false, 00:18:59.669 "enable_placement_id": 0, 00:18:59.669 "enable_zerocopy_send_server": true, 00:18:59.669 "enable_zerocopy_send_client": false, 00:18:59.669 "zerocopy_threshold": 0, 00:18:59.669 "tls_version": 0, 00:18:59.669 "enable_ktls": false 00:18:59.669 } 00:18:59.669 } 00:18:59.669 ] 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "subsystem": "vmd", 00:18:59.669 "config": [] 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "subsystem": "accel", 00:18:59.669 "config": [ 00:18:59.669 { 00:18:59.669 "method": "accel_set_options", 00:18:59.669 "params": { 00:18:59.669 "small_cache_size": 128, 00:18:59.669 "large_cache_size": 16, 00:18:59.669 "task_count": 2048, 00:18:59.669 "sequence_count": 2048, 00:18:59.669 "buf_count": 2048 00:18:59.669 } 00:18:59.669 } 00:18:59.669 ] 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "subsystem": "bdev", 00:18:59.669 "config": [ 00:18:59.669 { 00:18:59.669 "method": "bdev_set_options", 00:18:59.669 "params": { 00:18:59.669 "bdev_io_pool_size": 65535, 00:18:59.669 "bdev_io_cache_size": 256, 00:18:59.669 "bdev_auto_examine": true, 00:18:59.669 "iobuf_small_cache_size": 128, 00:18:59.669 "iobuf_large_cache_size": 16 00:18:59.669 } 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "method": "bdev_raid_set_options", 00:18:59.669 "params": { 00:18:59.669 "process_window_size_kb": 1024 00:18:59.669 } 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "method": "bdev_iscsi_set_options", 00:18:59.669 "params": { 00:18:59.669 "timeout_sec": 30 00:18:59.669 } 00:18:59.669 }, 00:18:59.669 { 00:18:59.669 "method": "bdev_nvme_set_options", 00:18:59.669 "params": { 00:18:59.669 "action_on_timeout": "none", 00:18:59.669 "timeout_us": 0, 00:18:59.669 "timeout_admin_us": 0, 00:18:59.669 "keep_alive_timeout_ms": 10000, 00:18:59.669 "arbitration_burst": 0, 00:18:59.669 "low_priority_weight": 0, 00:18:59.669 "medium_priority_weight": 0, 00:18:59.669 "high_priority_weight": 0, 00:18:59.669 "nvme_adminq_poll_period_us": 10000, 00:18:59.669 "nvme_ioq_poll_period_us": 0, 00:18:59.669 "io_queue_requests": 0, 00:18:59.669 "delay_cmd_submit": true, 00:18:59.669 "transport_retry_count": 4, 00:18:59.669 "bdev_retry_count": 3, 00:18:59.669 "transport_ack_timeout": 0, 00:18:59.669 "ctrlr_loss_timeout_sec": 0, 00:18:59.669 "reconnect_delay_sec": 0, 00:18:59.669 "fast_io_fail_timeout_sec": 0, 00:18:59.669 "disable_auto_failback": false, 00:18:59.669 "generate_uuids": false, 00:18:59.669 "transport_tos": 0, 00:18:59.669 "nvme_error_stat": false, 00:18:59.670 "rdma_srq_size": 0, 00:18:59.670 "io_path_stat": false, 00:18:59.670 "allow_accel_sequence": false, 00:18:59.670 "rdma_max_cq_size": 0, 00:18:59.670 "rdma_cm_event_timeout_ms": 0, 00:18:59.670 "dhchap_digests": [ 00:18:59.670 "sha256", 00:18:59.670 "sha384", 00:18:59.670 "sha512" 00:18:59.670 ], 00:18:59.670 "dhchap_dhgroups": [ 00:18:59.670 "null", 00:18:59.670 "ffdhe2048", 00:18:59.670 "ffdhe3072", 00:18:59.670 "ffdhe4096", 00:18:59.670 "ffdhe6144", 00:18:59.670 "ffdhe8192" 00:18:59.670 ] 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "bdev_nvme_set_hotplug", 00:18:59.670 "params": { 00:18:59.670 "period_us": 100000, 00:18:59.670 "enable": false 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "bdev_malloc_create", 00:18:59.670 "params": { 00:18:59.670 "name": "malloc0", 00:18:59.670 "num_blocks": 8192, 00:18:59.670 "block_size": 4096, 00:18:59.670 "physical_block_size": 4096, 00:18:59.670 "uuid": 
"af684049-163f-4254-bf55-8abaaefdb3dc", 00:18:59.670 "optimal_io_boundary": 0 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "bdev_wait_for_examine" 00:18:59.670 } 00:18:59.670 ] 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "subsystem": "nbd", 00:18:59.670 "config": [] 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "subsystem": "scheduler", 00:18:59.670 "config": [ 00:18:59.670 { 00:18:59.670 "method": "framework_set_scheduler", 00:18:59.670 "params": { 00:18:59.670 "name": "static" 00:18:59.670 } 00:18:59.670 } 00:18:59.670 ] 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "subsystem": "nvmf", 00:18:59.670 "config": [ 00:18:59.670 { 00:18:59.670 "method": "nvmf_set_config", 00:18:59.670 "params": { 00:18:59.670 "discovery_filter": "match_any", 00:18:59.670 "admin_cmd_passthru": { 00:18:59.670 "identify_ctrlr": false 00:18:59.670 } 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_set_max_subsystems", 00:18:59.670 "params": { 00:18:59.670 "max_subsystems": 1024 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_set_crdt", 00:18:59.670 "params": { 00:18:59.670 "crdt1": 0, 00:18:59.670 "crdt2": 0, 00:18:59.670 "crdt3": 0 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_create_transport", 00:18:59.670 "params": { 00:18:59.670 "trtype": "TCP", 00:18:59.670 "max_queue_depth": 128, 00:18:59.670 "max_io_qpairs_per_ctrlr": 127, 00:18:59.670 "in_capsule_data_size": 4096, 00:18:59.670 "max_io_size": 131072, 00:18:59.670 "io_unit_size": 131072, 00:18:59.670 "max_aq_depth": 128, 00:18:59.670 "num_shared_buffers": 511, 00:18:59.670 "buf_cache_size": 4294967295, 00:18:59.670 "dif_insert_or_strip": false, 00:18:59.670 "zcopy": false, 00:18:59.670 "c2h_success": false, 00:18:59.670 "sock_priority": 0, 00:18:59.670 "abort_timeout_sec": 1, 00:18:59.670 "ack_timeout": 0, 00:18:59.670 "data_wr_pool_size": 0 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_create_subsystem", 00:18:59.670 "params": { 00:18:59.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.670 "allow_any_host": false, 00:18:59.670 "serial_number": "SPDK00000000000001", 00:18:59.670 "model_number": "SPDK bdev Controller", 00:18:59.670 "max_namespaces": 10, 00:18:59.670 "min_cntlid": 1, 00:18:59.670 "max_cntlid": 65519, 00:18:59.670 "ana_reporting": false 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_subsystem_add_host", 00:18:59.670 "params": { 00:18:59.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.670 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.670 "psk": "/tmp/tmp.WbnBCwb0Vw" 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_subsystem_add_ns", 00:18:59.670 "params": { 00:18:59.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.670 "namespace": { 00:18:59.670 "nsid": 1, 00:18:59.670 "bdev_name": "malloc0", 00:18:59.670 "nguid": "AF684049163F4254BF558ABAAEFDB3DC", 00:18:59.670 "uuid": "af684049-163f-4254-bf55-8abaaefdb3dc", 00:18:59.670 "no_auto_visible": false 00:18:59.670 } 00:18:59.670 } 00:18:59.670 }, 00:18:59.670 { 00:18:59.670 "method": "nvmf_subsystem_add_listener", 00:18:59.670 "params": { 00:18:59.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.670 "listen_address": { 00:18:59.670 "trtype": "TCP", 00:18:59.670 "adrfam": "IPv4", 00:18:59.670 "traddr": "10.0.0.2", 00:18:59.670 "trsvcid": "4420" 00:18:59.670 }, 00:18:59.670 "secure_channel": true 00:18:59.670 } 00:18:59.670 } 00:18:59.670 ] 00:18:59.670 } 00:18:59.670 ] 00:18:59.670 }' 00:18:59.670 11:45:07 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:00.236 11:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:00.236 "subsystems": [ 00:19:00.236 { 00:19:00.236 "subsystem": "keyring", 00:19:00.236 "config": [] 00:19:00.236 }, 00:19:00.236 { 00:19:00.236 "subsystem": "iobuf", 00:19:00.236 "config": [ 00:19:00.236 { 00:19:00.236 "method": "iobuf_set_options", 00:19:00.236 "params": { 00:19:00.236 "small_pool_count": 8192, 00:19:00.236 "large_pool_count": 1024, 00:19:00.236 "small_bufsize": 8192, 00:19:00.236 "large_bufsize": 135168 00:19:00.237 } 00:19:00.237 } 00:19:00.237 ] 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "subsystem": "sock", 00:19:00.237 "config": [ 00:19:00.237 { 00:19:00.237 "method": "sock_set_default_impl", 00:19:00.237 "params": { 00:19:00.237 "impl_name": "posix" 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "sock_impl_set_options", 00:19:00.237 "params": { 00:19:00.237 "impl_name": "ssl", 00:19:00.237 "recv_buf_size": 4096, 00:19:00.237 "send_buf_size": 4096, 00:19:00.237 "enable_recv_pipe": true, 00:19:00.237 "enable_quickack": false, 00:19:00.237 "enable_placement_id": 0, 00:19:00.237 "enable_zerocopy_send_server": true, 00:19:00.237 "enable_zerocopy_send_client": false, 00:19:00.237 "zerocopy_threshold": 0, 00:19:00.237 "tls_version": 0, 00:19:00.237 "enable_ktls": false 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "sock_impl_set_options", 00:19:00.237 "params": { 00:19:00.237 "impl_name": "posix", 00:19:00.237 "recv_buf_size": 2097152, 00:19:00.237 "send_buf_size": 2097152, 00:19:00.237 "enable_recv_pipe": true, 00:19:00.237 "enable_quickack": false, 00:19:00.237 "enable_placement_id": 0, 00:19:00.237 "enable_zerocopy_send_server": true, 00:19:00.237 "enable_zerocopy_send_client": false, 00:19:00.237 "zerocopy_threshold": 0, 00:19:00.237 "tls_version": 0, 00:19:00.237 "enable_ktls": false 00:19:00.237 } 00:19:00.237 } 00:19:00.237 ] 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "subsystem": "vmd", 00:19:00.237 "config": [] 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "subsystem": "accel", 00:19:00.237 "config": [ 00:19:00.237 { 00:19:00.237 "method": "accel_set_options", 00:19:00.237 "params": { 00:19:00.237 "small_cache_size": 128, 00:19:00.237 "large_cache_size": 16, 00:19:00.237 "task_count": 2048, 00:19:00.237 "sequence_count": 2048, 00:19:00.237 "buf_count": 2048 00:19:00.237 } 00:19:00.237 } 00:19:00.237 ] 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "subsystem": "bdev", 00:19:00.237 "config": [ 00:19:00.237 { 00:19:00.237 "method": "bdev_set_options", 00:19:00.237 "params": { 00:19:00.237 "bdev_io_pool_size": 65535, 00:19:00.237 "bdev_io_cache_size": 256, 00:19:00.237 "bdev_auto_examine": true, 00:19:00.237 "iobuf_small_cache_size": 128, 00:19:00.237 "iobuf_large_cache_size": 16 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "bdev_raid_set_options", 00:19:00.237 "params": { 00:19:00.237 "process_window_size_kb": 1024 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "bdev_iscsi_set_options", 00:19:00.237 "params": { 00:19:00.237 "timeout_sec": 30 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "bdev_nvme_set_options", 00:19:00.237 "params": { 00:19:00.237 "action_on_timeout": "none", 00:19:00.237 "timeout_us": 0, 00:19:00.237 "timeout_admin_us": 0, 00:19:00.237 "keep_alive_timeout_ms": 10000, 00:19:00.237 "arbitration_burst": 0, 
00:19:00.237 "low_priority_weight": 0, 00:19:00.237 "medium_priority_weight": 0, 00:19:00.237 "high_priority_weight": 0, 00:19:00.237 "nvme_adminq_poll_period_us": 10000, 00:19:00.237 "nvme_ioq_poll_period_us": 0, 00:19:00.237 "io_queue_requests": 512, 00:19:00.237 "delay_cmd_submit": true, 00:19:00.237 "transport_retry_count": 4, 00:19:00.237 "bdev_retry_count": 3, 00:19:00.237 "transport_ack_timeout": 0, 00:19:00.237 "ctrlr_loss_timeout_sec": 0, 00:19:00.237 "reconnect_delay_sec": 0, 00:19:00.237 "fast_io_fail_timeout_sec": 0, 00:19:00.237 "disable_auto_failback": false, 00:19:00.237 "generate_uuids": false, 00:19:00.237 "transport_tos": 0, 00:19:00.237 "nvme_error_stat": false, 00:19:00.237 "rdma_srq_size": 0, 00:19:00.237 "io_path_stat": false, 00:19:00.237 "allow_accel_sequence": false, 00:19:00.237 "rdma_max_cq_size": 0, 00:19:00.237 "rdma_cm_event_timeout_ms": 0, 00:19:00.237 "dhchap_digests": [ 00:19:00.237 "sha256", 00:19:00.237 "sha384", 00:19:00.237 "sha512" 00:19:00.237 ], 00:19:00.237 "dhchap_dhgroups": [ 00:19:00.237 "null", 00:19:00.237 "ffdhe2048", 00:19:00.237 "ffdhe3072", 00:19:00.237 "ffdhe4096", 00:19:00.237 "ffdhe6144", 00:19:00.237 "ffdhe8192" 00:19:00.237 ] 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "bdev_nvme_attach_controller", 00:19:00.237 "params": { 00:19:00.237 "name": "TLSTEST", 00:19:00.237 "trtype": "TCP", 00:19:00.237 "adrfam": "IPv4", 00:19:00.237 "traddr": "10.0.0.2", 00:19:00.237 "trsvcid": "4420", 00:19:00.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.237 "prchk_reftag": false, 00:19:00.237 "prchk_guard": false, 00:19:00.237 "ctrlr_loss_timeout_sec": 0, 00:19:00.237 "reconnect_delay_sec": 0, 00:19:00.237 "fast_io_fail_timeout_sec": 0, 00:19:00.237 "psk": "/tmp/tmp.WbnBCwb0Vw", 00:19:00.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.237 "hdgst": false, 00:19:00.237 "ddgst": false 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "bdev_nvme_set_hotplug", 00:19:00.237 "params": { 00:19:00.237 "period_us": 100000, 00:19:00.237 "enable": false 00:19:00.237 } 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "method": "bdev_wait_for_examine" 00:19:00.237 } 00:19:00.237 ] 00:19:00.237 }, 00:19:00.237 { 00:19:00.237 "subsystem": "nbd", 00:19:00.237 "config": [] 00:19:00.237 } 00:19:00.237 ] 00:19:00.237 }' 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3050504 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3050504 ']' 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3050504 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3050504 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3050504' 00:19:00.237 killing process with pid 3050504 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3050504 00:19:00.237 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.237 00:19:00.237 Latency(us) 00:19:00.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:00.237 =================================================================================================================== 00:19:00.237 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.237 [2024-07-15 11:45:07.951008] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.237 11:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3050504 00:19:00.237 11:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3050220 00:19:00.237 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3050220 ']' 00:19:00.237 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3050220 00:19:00.237 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:00.237 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:00.237 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3050220 00:19:00.495 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:00.495 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:00.495 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3050220' 00:19:00.495 killing process with pid 3050220 00:19:00.495 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3050220 00:19:00.495 [2024-07-15 11:45:08.238816] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:00.495 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3050220 00:19:00.754 11:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:00.754 11:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.754 11:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:00.754 "subsystems": [ 00:19:00.754 { 00:19:00.754 "subsystem": "keyring", 00:19:00.754 "config": [] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "iobuf", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "iobuf_set_options", 00:19:00.754 "params": { 00:19:00.754 "small_pool_count": 8192, 00:19:00.754 "large_pool_count": 1024, 00:19:00.754 "small_bufsize": 8192, 00:19:00.754 "large_bufsize": 135168 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "sock", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "sock_set_default_impl", 00:19:00.754 "params": { 00:19:00.754 "impl_name": "posix" 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "sock_impl_set_options", 00:19:00.754 "params": { 00:19:00.754 "impl_name": "ssl", 00:19:00.754 "recv_buf_size": 4096, 00:19:00.754 "send_buf_size": 4096, 00:19:00.754 "enable_recv_pipe": true, 00:19:00.754 "enable_quickack": false, 00:19:00.754 "enable_placement_id": 0, 00:19:00.754 "enable_zerocopy_send_server": true, 00:19:00.754 "enable_zerocopy_send_client": false, 00:19:00.754 "zerocopy_threshold": 0, 00:19:00.754 "tls_version": 0, 00:19:00.754 "enable_ktls": false 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "sock_impl_set_options", 00:19:00.754 "params": { 00:19:00.754 "impl_name": "posix", 00:19:00.754 "recv_buf_size": 2097152, 00:19:00.754 "send_buf_size": 2097152, 00:19:00.754 "enable_recv_pipe": true, 
00:19:00.754 "enable_quickack": false, 00:19:00.754 "enable_placement_id": 0, 00:19:00.754 "enable_zerocopy_send_server": true, 00:19:00.754 "enable_zerocopy_send_client": false, 00:19:00.754 "zerocopy_threshold": 0, 00:19:00.754 "tls_version": 0, 00:19:00.754 "enable_ktls": false 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "vmd", 00:19:00.754 "config": [] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "accel", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "accel_set_options", 00:19:00.754 "params": { 00:19:00.754 "small_cache_size": 128, 00:19:00.754 "large_cache_size": 16, 00:19:00.754 "task_count": 2048, 00:19:00.754 "sequence_count": 2048, 00:19:00.754 "buf_count": 2048 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "bdev", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "bdev_set_options", 00:19:00.754 "params": { 00:19:00.754 "bdev_io_pool_size": 65535, 00:19:00.754 "bdev_io_cache_size": 256, 00:19:00.754 "bdev_auto_examine": true, 00:19:00.754 "iobuf_small_cache_size": 128, 00:19:00.754 "iobuf_large_cache_size": 16 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_raid_set_options", 00:19:00.754 "params": { 00:19:00.754 "process_window_size_kb": 1024 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_iscsi_set_options", 00:19:00.754 "params": { 00:19:00.754 "timeout_sec": 30 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_nvme_set_options", 00:19:00.754 "params": { 00:19:00.754 "action_on_timeout": "none", 00:19:00.754 "timeout_us": 0, 00:19:00.754 "timeout_admin_us": 0, 00:19:00.755 "keep_alive_timeout_ms": 10000, 00:19:00.755 "arbitration_burst": 0, 00:19:00.755 "low_priority_weight": 0, 00:19:00.755 "medium_priority_weight": 0, 00:19:00.755 "high_priority_weight": 0, 00:19:00.755 "nvme_adminq_poll_period_us": 10000, 00:19:00.755 "nvme_ioq_poll_period_us": 0, 00:19:00.755 "io_queue_requests": 0, 00:19:00.755 "delay_cmd_submit": true, 00:19:00.755 "transport_retry_count": 4, 00:19:00.755 "bdev_retry_count": 3, 00:19:00.755 "transport_ack_timeout": 0, 00:19:00.755 "ctrlr_loss_timeout_sec": 0, 00:19:00.755 "reconnect_delay_sec": 0, 00:19:00.755 "fast_io_fail_timeout_sec": 0, 00:19:00.755 "disable_auto_failback": false, 00:19:00.755 "generate_uuids": false, 00:19:00.755 "transport_tos": 0, 00:19:00.755 "nvme_error_stat": false, 00:19:00.755 "rdma_srq_size": 0, 00:19:00.755 "io_path_stat": false, 00:19:00.755 "allow_accel_sequence": false, 00:19:00.755 "rdma_max_cq_size": 0, 00:19:00.755 "rdma_cm_event_timeout_ms": 0, 00:19:00.755 "dhchap_digests": [ 00:19:00.755 "sha256", 00:19:00.755 "sha384", 00:19:00.755 "sha512" 00:19:00.755 ], 00:19:00.755 "dhchap_dhgroups": [ 00:19:00.755 "null", 00:19:00.755 "ffdhe2048", 00:19:00.755 "ffdhe3072", 00:19:00.755 "ffdhe4096", 00:19:00.755 "ffdhe6144", 00:19:00.755 "ffdhe8192" 00:19:00.755 ] 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "bdev_nvme_set_hotplug", 00:19:00.755 "params": { 00:19:00.755 "period_us": 100000, 00:19:00.755 "enable": false 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "bdev_malloc_create", 00:19:00.755 "params": { 00:19:00.755 "name": "malloc0", 00:19:00.755 "num_blocks": 8192, 00:19:00.755 "block_size": 4096, 00:19:00.755 "physical_block_size": 4096, 00:19:00.755 "uuid": "af684049-163f-4254-bf55-8abaaefdb3dc", 00:19:00.755 "optimal_io_boundary": 0 
00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "bdev_wait_for_examine" 00:19:00.755 } 00:19:00.755 ] 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "subsystem": "nbd", 00:19:00.755 "config": [] 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "subsystem": "scheduler", 00:19:00.755 "config": [ 00:19:00.755 { 00:19:00.755 "method": "framework_set_scheduler", 00:19:00.755 "params": { 00:19:00.755 "name": "static" 00:19:00.755 } 00:19:00.755 } 00:19:00.755 ] 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "subsystem": "nvmf", 00:19:00.755 "config": [ 00:19:00.755 { 00:19:00.755 "method": "nvmf_set_config", 00:19:00.755 "params": { 00:19:00.755 "discovery_filter": "match_any", 00:19:00.755 "admin_cmd_passthru": { 00:19:00.755 "identify_ctrlr": false 00:19:00.755 } 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_set_max_subsystems", 00:19:00.755 "params": { 00:19:00.755 "max_subsystems": 1024 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_set_crdt", 00:19:00.755 "params": { 00:19:00.755 "crdt1": 0, 00:19:00.755 "crdt2": 0, 00:19:00.755 "crdt3": 0 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_create_transport", 00:19:00.755 "params": { 00:19:00.755 "trtype": "TCP", 00:19:00.755 "max_queue_depth": 128, 00:19:00.755 "max_io_qpairs_per_ctrlr": 127, 00:19:00.755 "in_capsule_data_size": 4096, 00:19:00.755 "max_io_size": 131072, 00:19:00.755 "io_unit_size": 131072, 00:19:00.755 "max_aq_depth": 128, 00:19:00.755 "num_shared_buffers": 511, 00:19:00.755 "buf_cache_size": 4294967295, 00:19:00.755 "dif_insert_or_strip": false, 00:19:00.755 "zcopy": false, 00:19:00.755 "c2h_success": false, 00:19:00.755 "sock_priority": 0, 00:19:00.755 "abort_timeout_sec": 1, 00:19:00.755 "ack_timeout": 0, 00:19:00.755 "data_wr_pool_size": 0 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_create_subsystem", 00:19:00.755 "params": { 00:19:00.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.755 "allow_any_host": false, 00:19:00.755 "serial_number": "SPDK00000000000001", 00:19:00.755 "model_number": "SPDK bdev Controller", 00:19:00.755 "max_namespaces": 10, 00:19:00.755 "min_cntlid": 1, 00:19:00.755 "max_cntlid": 65519, 00:19:00.755 "ana_reporting": false 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_subsystem_add_host", 00:19:00.755 "params": { 00:19:00.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.755 "host": "nqn.2016-06.io.spdk:host1", 00:19:00.755 "psk": "/tmp/tmp.WbnBCwb0Vw" 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_subsystem_add_ns", 00:19:00.755 "params": { 00:19:00.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.755 "namespace": { 00:19:00.755 "nsid": 1, 00:19:00.755 "bdev_name": "malloc0", 00:19:00.755 "nguid": "AF684049163F4254BF558ABAAEFDB3DC", 00:19:00.755 "uuid": "af684049-163f-4254-bf55-8abaaefdb3dc", 00:19:00.755 "no_auto_visible": false 00:19:00.755 } 00:19:00.755 } 00:19:00.755 }, 00:19:00.755 { 00:19:00.755 "method": "nvmf_subsystem_add_listener", 00:19:00.755 "params": { 00:19:00.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.755 "listen_address": { 00:19:00.755 "trtype": "TCP", 00:19:00.755 "adrfam": "IPv4", 00:19:00.755 "traddr": "10.0.0.2", 00:19:00.755 "trsvcid": "4420" 00:19:00.755 }, 00:19:00.755 "secure_channel": true 00:19:00.755 } 00:19:00.755 } 00:19:00.755 ] 00:19:00.755 } 00:19:00.755 ] 00:19:00.755 }' 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.755 
11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3051163 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3051163 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3051163 ']' 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.755 11:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.755 [2024-07-15 11:45:08.566037] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:00.755 [2024-07-15 11:45:08.566145] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.755 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.755 [2024-07-15 11:45:08.629884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.755 [2024-07-15 11:45:08.738171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.755 [2024-07-15 11:45:08.738233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.755 [2024-07-15 11:45:08.738271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.755 [2024-07-15 11:45:08.738283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.755 [2024-07-15 11:45:08.738293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
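For reference, the relaunch recorded above boils down to restarting nvmf_tgt inside the test namespace with the previously captured JSON configuration supplied on a file descriptor instead of a file on disk. A minimal bash sketch, assuming the harness populates /dev/fd/62 via process substitution (that plumbing is not visible in this log; the paths, core mask and namespace name are taken from the command line above):
# Hedged reconstruction: feed the saved target configuration back in at startup.
tgtconf=$(scripts/rpc.py save_config)   # captured earlier from the running target
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$tgtconf")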
00:19:00.755 [2024-07-15 11:45:08.738399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.015 [2024-07-15 11:45:08.963141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.015 [2024-07-15 11:45:08.979101] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:01.015 [2024-07-15 11:45:08.995174] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.272 [2024-07-15 11:45:09.004949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3051432 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3051432 /var/tmp/bdevperf.sock 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3051432 ']' 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:01.839 11:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:01.839 "subsystems": [ 00:19:01.839 { 00:19:01.839 "subsystem": "keyring", 00:19:01.839 "config": [] 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "subsystem": "iobuf", 00:19:01.839 "config": [ 00:19:01.839 { 00:19:01.839 "method": "iobuf_set_options", 00:19:01.839 "params": { 00:19:01.839 "small_pool_count": 8192, 00:19:01.839 "large_pool_count": 1024, 00:19:01.839 "small_bufsize": 8192, 00:19:01.839 "large_bufsize": 135168 00:19:01.839 } 00:19:01.839 } 00:19:01.839 ] 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "subsystem": "sock", 00:19:01.839 "config": [ 00:19:01.839 { 00:19:01.839 "method": "sock_set_default_impl", 00:19:01.839 "params": { 00:19:01.839 "impl_name": "posix" 00:19:01.839 } 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "method": "sock_impl_set_options", 00:19:01.839 "params": { 00:19:01.839 "impl_name": "ssl", 00:19:01.839 "recv_buf_size": 4096, 00:19:01.839 "send_buf_size": 4096, 00:19:01.839 "enable_recv_pipe": true, 00:19:01.839 "enable_quickack": false, 00:19:01.839 "enable_placement_id": 0, 00:19:01.839 "enable_zerocopy_send_server": true, 00:19:01.839 "enable_zerocopy_send_client": false, 00:19:01.839 "zerocopy_threshold": 0, 00:19:01.839 "tls_version": 0, 00:19:01.839 "enable_ktls": false 00:19:01.839 } 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "method": "sock_impl_set_options", 00:19:01.839 "params": { 00:19:01.839 "impl_name": "posix", 00:19:01.839 "recv_buf_size": 2097152, 00:19:01.839 "send_buf_size": 2097152, 00:19:01.839 "enable_recv_pipe": true, 00:19:01.839 "enable_quickack": false, 00:19:01.839 "enable_placement_id": 0, 00:19:01.839 "enable_zerocopy_send_server": true, 00:19:01.839 "enable_zerocopy_send_client": false, 00:19:01.839 "zerocopy_threshold": 0, 00:19:01.839 "tls_version": 0, 00:19:01.839 "enable_ktls": false 00:19:01.839 } 00:19:01.839 } 00:19:01.839 ] 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "subsystem": "vmd", 00:19:01.839 "config": [] 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "subsystem": "accel", 00:19:01.839 "config": [ 00:19:01.839 { 00:19:01.839 "method": "accel_set_options", 00:19:01.839 "params": { 00:19:01.839 "small_cache_size": 128, 00:19:01.839 "large_cache_size": 16, 00:19:01.839 "task_count": 2048, 00:19:01.839 "sequence_count": 2048, 00:19:01.839 "buf_count": 2048 00:19:01.839 } 00:19:01.839 } 00:19:01.839 ] 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "subsystem": "bdev", 00:19:01.839 "config": [ 00:19:01.839 { 00:19:01.839 "method": "bdev_set_options", 00:19:01.839 "params": { 00:19:01.839 "bdev_io_pool_size": 65535, 00:19:01.839 "bdev_io_cache_size": 256, 00:19:01.839 "bdev_auto_examine": true, 00:19:01.839 "iobuf_small_cache_size": 128, 00:19:01.839 "iobuf_large_cache_size": 16 00:19:01.839 } 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "method": "bdev_raid_set_options", 00:19:01.839 "params": { 00:19:01.839 "process_window_size_kb": 1024 00:19:01.839 } 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "method": "bdev_iscsi_set_options", 00:19:01.839 "params": { 00:19:01.839 "timeout_sec": 30 00:19:01.839 } 00:19:01.839 }, 00:19:01.839 { 00:19:01.839 "method": "bdev_nvme_set_options", 00:19:01.839 "params": { 00:19:01.839 "action_on_timeout": "none", 00:19:01.839 "timeout_us": 0, 00:19:01.839 "timeout_admin_us": 0, 00:19:01.839 "keep_alive_timeout_ms": 10000, 00:19:01.839 "arbitration_burst": 0, 00:19:01.839 "low_priority_weight": 0, 00:19:01.839 "medium_priority_weight": 0, 00:19:01.839 "high_priority_weight": 0, 00:19:01.839 
"nvme_adminq_poll_period_us": 10000, 00:19:01.839 "nvme_ioq_poll_period_us": 0, 00:19:01.839 "io_queue_requests": 512, 00:19:01.839 "delay_cmd_submit": true, 00:19:01.839 "transport_retry_count": 4, 00:19:01.839 "bdev_retry_count": 3, 00:19:01.839 "transport_ack_timeout": 0, 00:19:01.839 "ctrlr_loss_timeout_sec": 0, 00:19:01.839 "reconnect_delay_sec": 0, 00:19:01.839 "fast_io_fail_timeout_sec": 0, 00:19:01.839 "disable_auto_failback": false, 00:19:01.839 "generate_uuids": false, 00:19:01.839 "transport_tos": 0, 00:19:01.839 "nvme_error_stat": false, 00:19:01.839 "rdma_srq_size": 0, 00:19:01.839 "io_path_stat": false, 00:19:01.839 "allow_accel_sequence": false, 00:19:01.839 "rdma_max_cq_size": 0, 00:19:01.839 "rdma_cm_event_timeout_ms": 0, 00:19:01.839 "dhchap_digests": [ 00:19:01.839 "sha256", 00:19:01.839 "sha384", 00:19:01.839 "sha512" 00:19:01.839 ], 00:19:01.839 "dhchap_dhgroups": [ 00:19:01.839 "null", 00:19:01.839 "ffdhe2048", 00:19:01.839 "ffdhe3072", 00:19:01.839 "ffdhe4096", 00:19:01.839 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.839 he6144", 00:19:01.840 "ffdhe8192" 00:19:01.840 ] 00:19:01.840 } 00:19:01.840 }, 00:19:01.840 { 00:19:01.840 "method": "bdev_nvme_attach_controller", 00:19:01.840 "params": { 00:19:01.840 "name": "TLSTEST", 00:19:01.840 "trtype": "TCP", 00:19:01.840 "adrfam": "IPv4", 00:19:01.840 "traddr": "10.0.0.2", 00:19:01.840 "trsvcid": "4420", 00:19:01.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.840 "prchk_reftag": false, 00:19:01.840 "prchk_guard": false, 00:19:01.840 "ctrlr_loss_timeout_sec": 0, 00:19:01.840 "reconnect_delay_sec": 0, 00:19:01.840 "fast_io_fail_timeout_sec": 0, 00:19:01.840 "psk": "/tmp/tmp.WbnBCwb0Vw", 00:19:01.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.840 "hdgst": false, 00:19:01.840 "ddgst": false 00:19:01.840 } 00:19:01.840 }, 00:19:01.840 { 00:19:01.840 "method": "bdev_nvme_set_hotplug", 00:19:01.840 "params": { 00:19:01.840 "period_us": 100000, 00:19:01.840 "enable": false 00:19:01.840 } 00:19:01.840 }, 00:19:01.840 { 00:19:01.840 "method": "bdev_wait_for_examine" 00:19:01.840 } 00:19:01.840 ] 00:19:01.840 }, 00:19:01.840 { 00:19:01.840 "subsystem": "nbd", 00:19:01.840 "config": [] 00:19:01.840 } 00:19:01.840 ] 00:19:01.840 }' 00:19:01.840 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.840 11:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.840 [2024-07-15 11:45:09.579476] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:01.840 [2024-07-15 11:45:09.579564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051432 ] 00:19:01.840 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.840 [2024-07-15 11:45:09.638501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.840 [2024-07-15 11:45:09.745393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.098 [2024-07-15 11:45:09.920303] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.098 [2024-07-15 11:45:09.920420] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:02.662 11:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.662 11:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:02.662 11:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.919 Running I/O for 10 seconds... 00:19:12.882 00:19:12.882 Latency(us) 00:19:12.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.882 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:12.882 Verification LBA range: start 0x0 length 0x2000 00:19:12.882 TLSTESTn1 : 10.02 3615.59 14.12 0.00 0.00 35343.51 8252.68 51263.72 00:19:12.882 =================================================================================================================== 00:19:12.882 Total : 3615.59 14.12 0.00 0.00 35343.51 8252.68 51263.72 00:19:12.882 0 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3051432 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3051432 ']' 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3051432 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3051432 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3051432' 00:19:12.882 killing process with pid 3051432 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3051432 00:19:12.882 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.882 00:19:12.882 Latency(us) 00:19:12.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.882 =================================================================================================================== 00:19:12.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.882 [2024-07-15 11:45:20.766884] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:12.882 11:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3051432 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3051163 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3051163 ']' 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3051163 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3051163 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3051163' 00:19:13.138 killing process with pid 3051163 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3051163 00:19:13.138 [2024-07-15 11:45:21.058930] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:13.138 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3051163 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3052764 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3052764 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3052764 ']' 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.398 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.657 [2024-07-15 11:45:21.387825] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:13.657 [2024-07-15 11:45:21.387923] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.657 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.657 [2024-07-15 11:45:21.451569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.657 [2024-07-15 11:45:21.550199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.657 [2024-07-15 11:45:21.550258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.657 [2024-07-15 11:45:21.550279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.657 [2024-07-15 11:45:21.550289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.657 [2024-07-15 11:45:21.550299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.657 [2024-07-15 11:45:21.550326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.WbnBCwb0Vw 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WbnBCwb0Vw 00:19:13.914 11:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:14.171 [2024-07-15 11:45:21.923147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.171 11:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:14.428 11:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:14.684 [2024-07-15 11:45:22.460572] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.684 [2024-07-15 11:45:22.460813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.684 11:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.941 malloc0 00:19:14.941 11:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:15.198 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WbnBCwb0Vw 00:19:15.455 [2024-07-15 11:45:23.241048] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3053041 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3053041 /var/tmp/bdevperf.sock 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3053041 ']' 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.455 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.455 [2024-07-15 11:45:23.303180] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:15.455 [2024-07-15 11:45:23.303270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053041 ] 00:19:15.455 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.455 [2024-07-15 11:45:23.362115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.712 [2024-07-15 11:45:23.471308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.712 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.712 11:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.712 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WbnBCwb0Vw 00:19:15.969 11:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:16.226 [2024-07-15 11:45:24.044489] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.226 nvme0n1 00:19:16.226 11:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.484 Running I/O for 1 seconds... 
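This pass (target/tls.sh@219 onward) drives the same TLS connection through the keyring instead of handing bdevperf a raw PSK path. A short bash sketch of the flow, using only commands that appear verbatim above (socket paths, key name and NQNs are from this run):
# Target side: authorize host1 for cnode1 with the PSK file (tls.sh@58).
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WbnBCwb0Vw
# Initiator side: register the PSK file as key0 in bdevperf's keyring,
# attach over TLS by key name, then run the 1-second verify workload.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WbnBCwb0Vw
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests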
00:19:17.418 00:19:17.418 Latency(us) 00:19:17.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:17.418 Verification LBA range: start 0x0 length 0x2000 00:19:17.418 nvme0n1 : 1.02 3570.80 13.95 0.00 0.00 35504.68 5801.15 54370.61 00:19:17.418 =================================================================================================================== 00:19:17.418 Total : 3570.80 13.95 0.00 0.00 35504.68 5801.15 54370.61 00:19:17.418 0 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3053041 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3053041 ']' 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3053041 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3053041 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3053041' 00:19:17.418 killing process with pid 3053041 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3053041 00:19:17.418 Received shutdown signal, test time was about 1.000000 seconds 00:19:17.418 00:19:17.418 Latency(us) 00:19:17.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.418 =================================================================================================================== 00:19:17.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.418 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3053041 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3052764 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3052764 ']' 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3052764 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3052764 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3052764' 00:19:17.676 killing process with pid 3052764 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3052764 00:19:17.676 [2024-07-15 11:45:25.603990] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:17.676 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3052764 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.936 
11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3053326 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3053326 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3053326 ']' 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.936 11:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.195 [2024-07-15 11:45:25.935270] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:18.195 [2024-07-15 11:45:25.935369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.195 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.195 [2024-07-15 11:45:25.997982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.195 [2024-07-15 11:45:26.094859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.195 [2024-07-15 11:45:26.094918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.195 [2024-07-15 11:45:26.094941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.195 [2024-07-15 11:45:26.094953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.195 [2024-07-15 11:45:26.094962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
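As the app_setup_trace notices above point out, the target runs with the full 0xFFFF tracepoint mask, so events can be inspected at runtime. A small sketch based only on the commands the notices themselves name (the spdk_trace binary location and the copy destination are assumptions):
# Snapshot nvmf tracepoints from the running target (shm id 0)...
build/bin/spdk_trace -s nvmf -i 0
# ...or keep the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0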
00:19:18.195 [2024-07-15 11:45:26.094988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.454 [2024-07-15 11:45:26.238143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.454 malloc0 00:19:18.454 [2024-07-15 11:45:26.270590] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.454 [2024-07-15 11:45:26.270869] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3053426 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3053426 /var/tmp/bdevperf.sock 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3053426 ']' 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.454 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.454 [2024-07-15 11:45:26.339877] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
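In this final pass the key moves into the target's keyring as well: the tgtcfg dump that follows shows keyring_file_add_key registering key0 and nvmf_subsystem_add_host referencing "psk": "key0" instead of a path. A hedged bash sketch of that target-side setup, reconstructed from the saved configuration below rather than from the rpc_cmd steps, which the log does not show in full:
# Illustrative only: the exact flags used by rpc_cmd in target/tls.sh@239 are not visible here.
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WbnBCwb0Vw
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
    -a 10.0.0.2 -s 4420 -k
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0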
00:19:18.454 [2024-07-15 11:45:26.339952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053426 ] 00:19:18.454 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.454 [2024-07-15 11:45:26.398025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.713 [2024-07-15 11:45:26.504863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.713 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.713 11:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.713 11:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WbnBCwb0Vw 00:19:18.970 11:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:19.227 [2024-07-15 11:45:27.124477] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.227 nvme0n1 00:19:19.227 11:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.484 Running I/O for 1 seconds... 00:19:20.418 00:19:20.418 Latency(us) 00:19:20.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:20.418 Verification LBA range: start 0x0 length 0x2000 00:19:20.418 nvme0n1 : 1.02 3630.85 14.18 0.00 0.00 34894.95 5825.42 33204.91 00:19:20.418 =================================================================================================================== 00:19:20.418 Total : 3630.85 14.18 0.00 0.00 34894.95 5825.42 33204.91 00:19:20.418 0 00:19:20.418 11:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:20.418 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.418 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.675 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.675 11:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:20.675 "subsystems": [ 00:19:20.675 { 00:19:20.675 "subsystem": "keyring", 00:19:20.675 "config": [ 00:19:20.675 { 00:19:20.675 "method": "keyring_file_add_key", 00:19:20.675 "params": { 00:19:20.675 "name": "key0", 00:19:20.675 "path": "/tmp/tmp.WbnBCwb0Vw" 00:19:20.675 } 00:19:20.675 } 00:19:20.675 ] 00:19:20.675 }, 00:19:20.676 { 00:19:20.676 "subsystem": "iobuf", 00:19:20.676 "config": [ 00:19:20.676 { 00:19:20.676 "method": "iobuf_set_options", 00:19:20.676 "params": { 00:19:20.676 "small_pool_count": 8192, 00:19:20.676 "large_pool_count": 1024, 00:19:20.676 "small_bufsize": 8192, 00:19:20.676 "large_bufsize": 135168 00:19:20.676 } 00:19:20.676 } 00:19:20.676 ] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "sock", 00:19:20.676 "config": [ 00:19:20.676 { 00:19:20.676 "method": "sock_set_default_impl", 00:19:20.676 "params": { 00:19:20.676 "impl_name": "posix" 00:19:20.676 } 
00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "sock_impl_set_options", 00:19:20.676 "params": { 00:19:20.676 "impl_name": "ssl", 00:19:20.676 "recv_buf_size": 4096, 00:19:20.676 "send_buf_size": 4096, 00:19:20.676 "enable_recv_pipe": true, 00:19:20.676 "enable_quickack": false, 00:19:20.676 "enable_placement_id": 0, 00:19:20.676 "enable_zerocopy_send_server": true, 00:19:20.676 "enable_zerocopy_send_client": false, 00:19:20.676 "zerocopy_threshold": 0, 00:19:20.676 "tls_version": 0, 00:19:20.676 "enable_ktls": false 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "sock_impl_set_options", 00:19:20.676 "params": { 00:19:20.676 "impl_name": "posix", 00:19:20.676 "recv_buf_size": 2097152, 00:19:20.676 "send_buf_size": 2097152, 00:19:20.676 "enable_recv_pipe": true, 00:19:20.676 "enable_quickack": false, 00:19:20.676 "enable_placement_id": 0, 00:19:20.676 "enable_zerocopy_send_server": true, 00:19:20.676 "enable_zerocopy_send_client": false, 00:19:20.676 "zerocopy_threshold": 0, 00:19:20.676 "tls_version": 0, 00:19:20.676 "enable_ktls": false 00:19:20.676 } 00:19:20.676 } 00:19:20.676 ] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "vmd", 00:19:20.676 "config": [] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "accel", 00:19:20.676 "config": [ 00:19:20.676 { 00:19:20.676 "method": "accel_set_options", 00:19:20.676 "params": { 00:19:20.676 "small_cache_size": 128, 00:19:20.676 "large_cache_size": 16, 00:19:20.676 "task_count": 2048, 00:19:20.676 "sequence_count": 2048, 00:19:20.676 "buf_count": 2048 00:19:20.676 } 00:19:20.676 } 00:19:20.676 ] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "bdev", 00:19:20.676 "config": [ 00:19:20.676 { 00:19:20.676 "method": "bdev_set_options", 00:19:20.676 "params": { 00:19:20.676 "bdev_io_pool_size": 65535, 00:19:20.676 "bdev_io_cache_size": 256, 00:19:20.676 "bdev_auto_examine": true, 00:19:20.676 "iobuf_small_cache_size": 128, 00:19:20.676 "iobuf_large_cache_size": 16 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "bdev_raid_set_options", 00:19:20.676 "params": { 00:19:20.676 "process_window_size_kb": 1024 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "bdev_iscsi_set_options", 00:19:20.676 "params": { 00:19:20.676 "timeout_sec": 30 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "bdev_nvme_set_options", 00:19:20.676 "params": { 00:19:20.676 "action_on_timeout": "none", 00:19:20.676 "timeout_us": 0, 00:19:20.676 "timeout_admin_us": 0, 00:19:20.676 "keep_alive_timeout_ms": 10000, 00:19:20.676 "arbitration_burst": 0, 00:19:20.676 "low_priority_weight": 0, 00:19:20.676 "medium_priority_weight": 0, 00:19:20.676 "high_priority_weight": 0, 00:19:20.676 "nvme_adminq_poll_period_us": 10000, 00:19:20.676 "nvme_ioq_poll_period_us": 0, 00:19:20.676 "io_queue_requests": 0, 00:19:20.676 "delay_cmd_submit": true, 00:19:20.676 "transport_retry_count": 4, 00:19:20.676 "bdev_retry_count": 3, 00:19:20.676 "transport_ack_timeout": 0, 00:19:20.676 "ctrlr_loss_timeout_sec": 0, 00:19:20.676 "reconnect_delay_sec": 0, 00:19:20.676 "fast_io_fail_timeout_sec": 0, 00:19:20.676 "disable_auto_failback": false, 00:19:20.676 "generate_uuids": false, 00:19:20.676 "transport_tos": 0, 00:19:20.676 "nvme_error_stat": false, 00:19:20.676 "rdma_srq_size": 0, 00:19:20.676 "io_path_stat": false, 00:19:20.676 "allow_accel_sequence": false, 00:19:20.676 "rdma_max_cq_size": 0, 00:19:20.676 "rdma_cm_event_timeout_ms": 0, 00:19:20.676 "dhchap_digests": [ 00:19:20.676 "sha256", 
00:19:20.676 "sha384", 00:19:20.676 "sha512" 00:19:20.676 ], 00:19:20.676 "dhchap_dhgroups": [ 00:19:20.676 "null", 00:19:20.676 "ffdhe2048", 00:19:20.676 "ffdhe3072", 00:19:20.676 "ffdhe4096", 00:19:20.676 "ffdhe6144", 00:19:20.676 "ffdhe8192" 00:19:20.676 ] 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "bdev_nvme_set_hotplug", 00:19:20.676 "params": { 00:19:20.676 "period_us": 100000, 00:19:20.676 "enable": false 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "bdev_malloc_create", 00:19:20.676 "params": { 00:19:20.676 "name": "malloc0", 00:19:20.676 "num_blocks": 8192, 00:19:20.676 "block_size": 4096, 00:19:20.676 "physical_block_size": 4096, 00:19:20.676 "uuid": "4b3c833a-de8c-4bc1-8ae4-8bfb2f0f5214", 00:19:20.676 "optimal_io_boundary": 0 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "bdev_wait_for_examine" 00:19:20.676 } 00:19:20.676 ] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "nbd", 00:19:20.676 "config": [] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "scheduler", 00:19:20.676 "config": [ 00:19:20.676 { 00:19:20.676 "method": "framework_set_scheduler", 00:19:20.676 "params": { 00:19:20.676 "name": "static" 00:19:20.676 } 00:19:20.676 } 00:19:20.676 ] 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "subsystem": "nvmf", 00:19:20.676 "config": [ 00:19:20.676 { 00:19:20.676 "method": "nvmf_set_config", 00:19:20.676 "params": { 00:19:20.676 "discovery_filter": "match_any", 00:19:20.676 "admin_cmd_passthru": { 00:19:20.676 "identify_ctrlr": false 00:19:20.676 } 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_set_max_subsystems", 00:19:20.676 "params": { 00:19:20.676 "max_subsystems": 1024 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_set_crdt", 00:19:20.676 "params": { 00:19:20.676 "crdt1": 0, 00:19:20.676 "crdt2": 0, 00:19:20.676 "crdt3": 0 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_create_transport", 00:19:20.676 "params": { 00:19:20.676 "trtype": "TCP", 00:19:20.676 "max_queue_depth": 128, 00:19:20.676 "max_io_qpairs_per_ctrlr": 127, 00:19:20.676 "in_capsule_data_size": 4096, 00:19:20.676 "max_io_size": 131072, 00:19:20.676 "io_unit_size": 131072, 00:19:20.676 "max_aq_depth": 128, 00:19:20.676 "num_shared_buffers": 511, 00:19:20.676 "buf_cache_size": 4294967295, 00:19:20.676 "dif_insert_or_strip": false, 00:19:20.676 "zcopy": false, 00:19:20.676 "c2h_success": false, 00:19:20.676 "sock_priority": 0, 00:19:20.676 "abort_timeout_sec": 1, 00:19:20.676 "ack_timeout": 0, 00:19:20.676 "data_wr_pool_size": 0 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_create_subsystem", 00:19:20.676 "params": { 00:19:20.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.676 "allow_any_host": false, 00:19:20.676 "serial_number": "00000000000000000000", 00:19:20.676 "model_number": "SPDK bdev Controller", 00:19:20.676 "max_namespaces": 32, 00:19:20.676 "min_cntlid": 1, 00:19:20.676 "max_cntlid": 65519, 00:19:20.676 "ana_reporting": false 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_subsystem_add_host", 00:19:20.676 "params": { 00:19:20.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.676 "host": "nqn.2016-06.io.spdk:host1", 00:19:20.676 "psk": "key0" 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_subsystem_add_ns", 00:19:20.676 "params": { 00:19:20.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.676 "namespace": { 00:19:20.676 "nsid": 1, 
00:19:20.676 "bdev_name": "malloc0", 00:19:20.676 "nguid": "4B3C833ADE8C4BC18AE48BFB2F0F5214", 00:19:20.676 "uuid": "4b3c833a-de8c-4bc1-8ae4-8bfb2f0f5214", 00:19:20.676 "no_auto_visible": false 00:19:20.676 } 00:19:20.676 } 00:19:20.676 }, 00:19:20.676 { 00:19:20.676 "method": "nvmf_subsystem_add_listener", 00:19:20.676 "params": { 00:19:20.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.676 "listen_address": { 00:19:20.676 "trtype": "TCP", 00:19:20.676 "adrfam": "IPv4", 00:19:20.676 "traddr": "10.0.0.2", 00:19:20.676 "trsvcid": "4420" 00:19:20.676 }, 00:19:20.676 "secure_channel": true 00:19:20.676 } 00:19:20.676 } 00:19:20.676 ] 00:19:20.677 } 00:19:20.677 ] 00:19:20.677 }' 00:19:20.677 11:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:20.935 11:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:20.935 "subsystems": [ 00:19:20.935 { 00:19:20.935 "subsystem": "keyring", 00:19:20.935 "config": [ 00:19:20.935 { 00:19:20.935 "method": "keyring_file_add_key", 00:19:20.935 "params": { 00:19:20.935 "name": "key0", 00:19:20.935 "path": "/tmp/tmp.WbnBCwb0Vw" 00:19:20.935 } 00:19:20.935 } 00:19:20.935 ] 00:19:20.935 }, 00:19:20.935 { 00:19:20.935 "subsystem": "iobuf", 00:19:20.935 "config": [ 00:19:20.935 { 00:19:20.935 "method": "iobuf_set_options", 00:19:20.935 "params": { 00:19:20.935 "small_pool_count": 8192, 00:19:20.935 "large_pool_count": 1024, 00:19:20.935 "small_bufsize": 8192, 00:19:20.935 "large_bufsize": 135168 00:19:20.935 } 00:19:20.935 } 00:19:20.935 ] 00:19:20.935 }, 00:19:20.935 { 00:19:20.935 "subsystem": "sock", 00:19:20.935 "config": [ 00:19:20.935 { 00:19:20.935 "method": "sock_set_default_impl", 00:19:20.935 "params": { 00:19:20.935 "impl_name": "posix" 00:19:20.935 } 00:19:20.935 }, 00:19:20.935 { 00:19:20.935 "method": "sock_impl_set_options", 00:19:20.935 "params": { 00:19:20.935 "impl_name": "ssl", 00:19:20.935 "recv_buf_size": 4096, 00:19:20.935 "send_buf_size": 4096, 00:19:20.935 "enable_recv_pipe": true, 00:19:20.935 "enable_quickack": false, 00:19:20.935 "enable_placement_id": 0, 00:19:20.935 "enable_zerocopy_send_server": true, 00:19:20.935 "enable_zerocopy_send_client": false, 00:19:20.935 "zerocopy_threshold": 0, 00:19:20.935 "tls_version": 0, 00:19:20.935 "enable_ktls": false 00:19:20.935 } 00:19:20.935 }, 00:19:20.935 { 00:19:20.935 "method": "sock_impl_set_options", 00:19:20.935 "params": { 00:19:20.935 "impl_name": "posix", 00:19:20.935 "recv_buf_size": 2097152, 00:19:20.935 "send_buf_size": 2097152, 00:19:20.935 "enable_recv_pipe": true, 00:19:20.935 "enable_quickack": false, 00:19:20.935 "enable_placement_id": 0, 00:19:20.935 "enable_zerocopy_send_server": true, 00:19:20.936 "enable_zerocopy_send_client": false, 00:19:20.936 "zerocopy_threshold": 0, 00:19:20.936 "tls_version": 0, 00:19:20.936 "enable_ktls": false 00:19:20.936 } 00:19:20.936 } 00:19:20.936 ] 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "subsystem": "vmd", 00:19:20.936 "config": [] 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "subsystem": "accel", 00:19:20.936 "config": [ 00:19:20.936 { 00:19:20.936 "method": "accel_set_options", 00:19:20.936 "params": { 00:19:20.936 "small_cache_size": 128, 00:19:20.936 "large_cache_size": 16, 00:19:20.936 "task_count": 2048, 00:19:20.936 "sequence_count": 2048, 00:19:20.936 "buf_count": 2048 00:19:20.936 } 00:19:20.936 } 00:19:20.936 ] 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "subsystem": "bdev", 00:19:20.936 "config": [ 
00:19:20.936 { 00:19:20.936 "method": "bdev_set_options", 00:19:20.936 "params": { 00:19:20.936 "bdev_io_pool_size": 65535, 00:19:20.936 "bdev_io_cache_size": 256, 00:19:20.936 "bdev_auto_examine": true, 00:19:20.936 "iobuf_small_cache_size": 128, 00:19:20.936 "iobuf_large_cache_size": 16 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_raid_set_options", 00:19:20.936 "params": { 00:19:20.936 "process_window_size_kb": 1024 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_iscsi_set_options", 00:19:20.936 "params": { 00:19:20.936 "timeout_sec": 30 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_nvme_set_options", 00:19:20.936 "params": { 00:19:20.936 "action_on_timeout": "none", 00:19:20.936 "timeout_us": 0, 00:19:20.936 "timeout_admin_us": 0, 00:19:20.936 "keep_alive_timeout_ms": 10000, 00:19:20.936 "arbitration_burst": 0, 00:19:20.936 "low_priority_weight": 0, 00:19:20.936 "medium_priority_weight": 0, 00:19:20.936 "high_priority_weight": 0, 00:19:20.936 "nvme_adminq_poll_period_us": 10000, 00:19:20.936 "nvme_ioq_poll_period_us": 0, 00:19:20.936 "io_queue_requests": 512, 00:19:20.936 "delay_cmd_submit": true, 00:19:20.936 "transport_retry_count": 4, 00:19:20.936 "bdev_retry_count": 3, 00:19:20.936 "transport_ack_timeout": 0, 00:19:20.936 "ctrlr_loss_timeout_sec": 0, 00:19:20.936 "reconnect_delay_sec": 0, 00:19:20.936 "fast_io_fail_timeout_sec": 0, 00:19:20.936 "disable_auto_failback": false, 00:19:20.936 "generate_uuids": false, 00:19:20.936 "transport_tos": 0, 00:19:20.936 "nvme_error_stat": false, 00:19:20.936 "rdma_srq_size": 0, 00:19:20.936 "io_path_stat": false, 00:19:20.936 "allow_accel_sequence": false, 00:19:20.936 "rdma_max_cq_size": 0, 00:19:20.936 "rdma_cm_event_timeout_ms": 0, 00:19:20.936 "dhchap_digests": [ 00:19:20.936 "sha256", 00:19:20.936 "sha384", 00:19:20.936 "sha512" 00:19:20.936 ], 00:19:20.936 "dhchap_dhgroups": [ 00:19:20.936 "null", 00:19:20.936 "ffdhe2048", 00:19:20.936 "ffdhe3072", 00:19:20.936 "ffdhe4096", 00:19:20.936 "ffdhe6144", 00:19:20.936 "ffdhe8192" 00:19:20.936 ] 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_nvme_attach_controller", 00:19:20.936 "params": { 00:19:20.936 "name": "nvme0", 00:19:20.936 "trtype": "TCP", 00:19:20.936 "adrfam": "IPv4", 00:19:20.936 "traddr": "10.0.0.2", 00:19:20.936 "trsvcid": "4420", 00:19:20.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.936 "prchk_reftag": false, 00:19:20.936 "prchk_guard": false, 00:19:20.936 "ctrlr_loss_timeout_sec": 0, 00:19:20.936 "reconnect_delay_sec": 0, 00:19:20.936 "fast_io_fail_timeout_sec": 0, 00:19:20.936 "psk": "key0", 00:19:20.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.936 "hdgst": false, 00:19:20.936 "ddgst": false 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_nvme_set_hotplug", 00:19:20.936 "params": { 00:19:20.936 "period_us": 100000, 00:19:20.936 "enable": false 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_enable_histogram", 00:19:20.936 "params": { 00:19:20.936 "name": "nvme0n1", 00:19:20.936 "enable": true 00:19:20.936 } 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "method": "bdev_wait_for_examine" 00:19:20.936 } 00:19:20.936 ] 00:19:20.936 }, 00:19:20.936 { 00:19:20.936 "subsystem": "nbd", 00:19:20.936 "config": [] 00:19:20.936 } 00:19:20.936 ] 00:19:20.936 }' 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3053426 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3053426 ']' 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3053426 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3053426 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3053426' 00:19:20.936 killing process with pid 3053426 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3053426 00:19:20.936 Received shutdown signal, test time was about 1.000000 seconds 00:19:20.936 00:19:20.936 Latency(us) 00:19:20.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.936 =================================================================================================================== 00:19:20.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.936 11:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3053426 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3053326 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3053326 ']' 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3053326 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3053326 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3053326' 00:19:21.194 killing process with pid 3053326 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3053326 00:19:21.194 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3053326 00:19:21.454 11:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:21.454 11:45:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.454 11:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:21.454 "subsystems": [ 00:19:21.454 { 00:19:21.454 "subsystem": "keyring", 00:19:21.454 "config": [ 00:19:21.454 { 00:19:21.454 "method": "keyring_file_add_key", 00:19:21.454 "params": { 00:19:21.454 "name": "key0", 00:19:21.454 "path": "/tmp/tmp.WbnBCwb0Vw" 00:19:21.454 } 00:19:21.454 } 00:19:21.454 ] 00:19:21.454 }, 00:19:21.454 { 00:19:21.454 "subsystem": "iobuf", 00:19:21.454 "config": [ 00:19:21.454 { 00:19:21.454 "method": "iobuf_set_options", 00:19:21.454 "params": { 00:19:21.454 "small_pool_count": 8192, 00:19:21.454 "large_pool_count": 1024, 00:19:21.454 "small_bufsize": 8192, 00:19:21.454 "large_bufsize": 135168 00:19:21.454 } 00:19:21.454 } 00:19:21.454 ] 00:19:21.454 }, 00:19:21.454 { 00:19:21.454 "subsystem": "sock", 00:19:21.454 "config": [ 00:19:21.454 { 
00:19:21.454 "method": "sock_set_default_impl", 00:19:21.454 "params": { 00:19:21.454 "impl_name": "posix" 00:19:21.454 } 00:19:21.454 }, 00:19:21.454 { 00:19:21.454 "method": "sock_impl_set_options", 00:19:21.455 "params": { 00:19:21.455 "impl_name": "ssl", 00:19:21.455 "recv_buf_size": 4096, 00:19:21.455 "send_buf_size": 4096, 00:19:21.455 "enable_recv_pipe": true, 00:19:21.455 "enable_quickack": false, 00:19:21.455 "enable_placement_id": 0, 00:19:21.455 "enable_zerocopy_send_server": true, 00:19:21.455 "enable_zerocopy_send_client": false, 00:19:21.455 "zerocopy_threshold": 0, 00:19:21.455 "tls_version": 0, 00:19:21.455 "enable_ktls": false 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "sock_impl_set_options", 00:19:21.455 "params": { 00:19:21.455 "impl_name": "posix", 00:19:21.455 "recv_buf_size": 2097152, 00:19:21.455 "send_buf_size": 2097152, 00:19:21.455 "enable_recv_pipe": true, 00:19:21.455 "enable_quickack": false, 00:19:21.455 "enable_placement_id": 0, 00:19:21.455 "enable_zerocopy_send_server": true, 00:19:21.455 "enable_zerocopy_send_client": false, 00:19:21.455 "zerocopy_threshold": 0, 00:19:21.455 "tls_version": 0, 00:19:21.455 "enable_ktls": false 00:19:21.455 } 00:19:21.455 } 00:19:21.455 ] 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "subsystem": "vmd", 00:19:21.455 "config": [] 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "subsystem": "accel", 00:19:21.455 "config": [ 00:19:21.455 { 00:19:21.455 "method": "accel_set_options", 00:19:21.455 "params": { 00:19:21.455 "small_cache_size": 128, 00:19:21.455 "large_cache_size": 16, 00:19:21.455 "task_count": 2048, 00:19:21.455 "sequence_count": 2048, 00:19:21.455 "buf_count": 2048 00:19:21.455 } 00:19:21.455 } 00:19:21.455 ] 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "subsystem": "bdev", 00:19:21.455 "config": [ 00:19:21.455 { 00:19:21.455 "method": "bdev_set_options", 00:19:21.455 "params": { 00:19:21.455 "bdev_io_pool_size": 65535, 00:19:21.455 "bdev_io_cache_size": 256, 00:19:21.455 "bdev_auto_examine": true, 00:19:21.455 "iobuf_small_cache_size": 128, 00:19:21.455 "iobuf_large_cache_size": 16 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "bdev_raid_set_options", 00:19:21.455 "params": { 00:19:21.455 "process_window_size_kb": 1024 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "bdev_iscsi_set_options", 00:19:21.455 "params": { 00:19:21.455 "timeout_sec": 30 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "bdev_nvme_set_options", 00:19:21.455 "params": { 00:19:21.455 "action_on_timeout": "none", 00:19:21.455 "timeout_us": 0, 00:19:21.455 "timeout_admin_us": 0, 00:19:21.455 "keep_alive_timeout_ms": 10000, 00:19:21.455 "arbitration_burst": 0, 00:19:21.455 "low_priority_weight": 0, 00:19:21.455 "medium_priority_weight": 0, 00:19:21.455 "high_priority_weight": 0, 00:19:21.455 "nvme_adminq_poll_period_us": 10000, 00:19:21.455 "nvme_ioq_poll_period_us": 0, 00:19:21.455 "io_queue_requests": 0, 00:19:21.455 "delay_cmd_submit": true, 00:19:21.455 "transport_retry_count": 4, 00:19:21.455 "bdev_retry_count": 3, 00:19:21.455 "transport_ack_timeout": 0, 00:19:21.455 "ctrlr_loss_timeout_sec": 0, 00:19:21.455 "reconnect_delay_sec": 0, 00:19:21.455 "fast_io_fail_timeout_sec": 0, 00:19:21.455 "disable_auto_failback": false, 00:19:21.455 "generate_uuids": false, 00:19:21.455 "transport_tos": 0, 00:19:21.455 "nvme_error_stat": false, 00:19:21.455 "rdma_srq_size": 0, 00:19:21.455 "io_path_stat": false, 00:19:21.455 "allow_accel_sequence": false, 00:19:21.455 
"rdma_max_cq_size": 0, 00:19:21.455 "rdma_cm_event_timeout_ms": 0, 00:19:21.455 "dhchap_digests": [ 00:19:21.455 "sha256", 00:19:21.455 "sha384", 00:19:21.455 "sha512" 00:19:21.455 ], 00:19:21.455 "dhchap_dhgroups": [ 00:19:21.455 "null", 00:19:21.455 "ffdhe2048", 00:19:21.455 "ffdhe3072", 00:19:21.455 "ffdhe4096", 00:19:21.455 "ffdhe6144", 00:19:21.455 "ffdhe8192" 00:19:21.455 ] 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "bdev_nvme_set_hotplug", 00:19:21.455 "params": { 00:19:21.455 "period_us": 100000, 00:19:21.455 "enable": false 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "bdev_malloc_create", 00:19:21.455 "params": { 00:19:21.455 "name": "malloc0", 00:19:21.455 "num_blocks": 8192, 00:19:21.455 "block_size": 4096, 00:19:21.455 "physical_block_size": 4096, 00:19:21.455 "uuid": "4b3c833a-de8c-4bc1-8ae4-8bfb2f0f5214", 00:19:21.455 "optimal_io_boundary": 0 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "bdev_wait_for_examine" 00:19:21.455 } 00:19:21.455 ] 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "subsystem": "nbd", 00:19:21.455 "config": [] 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "subsystem": "scheduler", 00:19:21.455 "config": [ 00:19:21.455 { 00:19:21.455 "method": "framework_set_scheduler", 00:19:21.455 "params": { 00:19:21.455 "name": "static" 00:19:21.455 } 00:19:21.455 } 00:19:21.455 ] 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "subsystem": "nvmf", 00:19:21.455 "config": [ 00:19:21.455 { 00:19:21.455 "method": "nvmf_set_config", 00:19:21.455 "params": { 00:19:21.455 "discovery_filter": "match_any", 00:19:21.455 "admin_cmd_passthru": { 00:19:21.455 "identify_ctrlr": false 00:19:21.455 } 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_set_max_subsystems", 00:19:21.455 "params": { 00:19:21.455 "max_subsystems": 1024 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_set_crdt", 00:19:21.455 "params": { 00:19:21.455 "crdt1": 0, 00:19:21.455 "crdt2": 0, 00:19:21.455 "crdt3": 0 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_create_transport", 00:19:21.455 "params": { 00:19:21.455 "trtype": "TCP", 00:19:21.455 "max_queue_depth": 128, 00:19:21.455 "max_io_qpairs_per_ctrlr": 127, 00:19:21.455 "in_capsule_data_size": 4096, 00:19:21.455 "max_io_size": 131072, 00:19:21.455 "io_unit_size": 131072, 00:19:21.455 "max_aq_depth": 128, 00:19:21.455 "num_shared_buffers": 511, 00:19:21.455 "buf_cache_size": 4294967295, 00:19:21.455 "dif_insert_or_strip": false, 00:19:21.455 "zcopy": false, 00:19:21.455 "c2h_success": false, 00:19:21.455 "sock_priority": 0, 00:19:21.455 "abort_timeout_sec": 1, 00:19:21.455 "ack_timeout": 0, 00:19:21.455 "data_wr_pool_size": 0 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_create_subsystem", 00:19:21.455 "params": { 00:19:21.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.455 "allow_any_host": false, 00:19:21.455 "serial_number": "00000000000000000000", 00:19:21.455 "model_number": "SPDK bdev Controller", 00:19:21.455 "max_namespaces": 32, 00:19:21.455 "min_cntlid": 1, 00:19:21.455 "max_cntlid": 65519, 00:19:21.455 "ana_reporting": false 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_subsystem_add_host", 00:19:21.455 "params": { 00:19:21.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.455 "host": "nqn.2016-06.io.spdk:host1", 00:19:21.455 "psk": "key0" 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_subsystem_add_ns", 00:19:21.455 
"params": { 00:19:21.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.455 "namespace": { 00:19:21.455 "nsid": 1, 00:19:21.455 "bdev_name": "malloc0", 00:19:21.455 "nguid": "4B3C833ADE8C4BC18AE48BFB2F0F5214", 00:19:21.455 "uuid": "4b3c833a-de8c-4bc1-8ae4-8bfb2f0f5214", 00:19:21.455 "no_auto_visible": false 00:19:21.455 } 00:19:21.455 } 00:19:21.455 }, 00:19:21.455 { 00:19:21.455 "method": "nvmf_subsystem_add_listener", 00:19:21.455 "params": { 00:19:21.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.455 "listen_address": { 00:19:21.455 "trtype": "TCP", 00:19:21.455 "adrfam": "IPv4", 00:19:21.455 "traddr": "10.0.0.2", 00:19:21.455 "trsvcid": "4420" 00:19:21.455 }, 00:19:21.455 "secure_channel": true 00:19:21.455 } 00:19:21.455 } 00:19:21.455 ] 00:19:21.455 } 00:19:21.455 ] 00:19:21.455 }' 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3053764 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3053764 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3053764 ']' 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.455 11:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.714 [2024-07-15 11:45:29.445211] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:21.714 [2024-07-15 11:45:29.445298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.714 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.714 [2024-07-15 11:45:29.507922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.714 [2024-07-15 11:45:29.609083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.714 [2024-07-15 11:45:29.609141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.714 [2024-07-15 11:45:29.609156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.714 [2024-07-15 11:45:29.609179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.714 [2024-07-15 11:45:29.609189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:21.714 [2024-07-15 11:45:29.609273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.972 [2024-07-15 11:45:29.847157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.972 [2024-07-15 11:45:29.879181] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.972 [2024-07-15 11:45:29.888949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3053915 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3053915 /var/tmp/bdevperf.sock 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3053915 ']' 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
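[Annotation, not part of the captured log] The JSON blob echoed next is the initiator-side copy of the saved config, extended with bdev_nvme_attach_controller (again using "psk": "key0" against nqn.2016-06.io.spdk:cnode1) and bdev_enable_histogram, and it reaches bdevperf as /dev/fd/63. Outside the harness the same launch could be reproduced roughly as below, reusing the flags visible in the captured command line; the process substitution used for -c is the assumption here.

  # sketch: hand the bdevperf config back in on a file descriptor and wait for RPC
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")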
00:19:22.537 11:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:22.537 "subsystems": [ 00:19:22.537 { 00:19:22.537 "subsystem": "keyring", 00:19:22.537 "config": [ 00:19:22.537 { 00:19:22.537 "method": "keyring_file_add_key", 00:19:22.537 "params": { 00:19:22.537 "name": "key0", 00:19:22.537 "path": "/tmp/tmp.WbnBCwb0Vw" 00:19:22.537 } 00:19:22.537 } 00:19:22.537 ] 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "subsystem": "iobuf", 00:19:22.537 "config": [ 00:19:22.537 { 00:19:22.537 "method": "iobuf_set_options", 00:19:22.537 "params": { 00:19:22.537 "small_pool_count": 8192, 00:19:22.537 "large_pool_count": 1024, 00:19:22.537 "small_bufsize": 8192, 00:19:22.537 "large_bufsize": 135168 00:19:22.537 } 00:19:22.537 } 00:19:22.537 ] 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "subsystem": "sock", 00:19:22.537 "config": [ 00:19:22.537 { 00:19:22.537 "method": "sock_set_default_impl", 00:19:22.537 "params": { 00:19:22.537 "impl_name": "posix" 00:19:22.537 } 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "method": "sock_impl_set_options", 00:19:22.537 "params": { 00:19:22.537 "impl_name": "ssl", 00:19:22.537 "recv_buf_size": 4096, 00:19:22.537 "send_buf_size": 4096, 00:19:22.537 "enable_recv_pipe": true, 00:19:22.537 "enable_quickack": false, 00:19:22.537 "enable_placement_id": 0, 00:19:22.537 "enable_zerocopy_send_server": true, 00:19:22.537 "enable_zerocopy_send_client": false, 00:19:22.537 "zerocopy_threshold": 0, 00:19:22.537 "tls_version": 0, 00:19:22.537 "enable_ktls": false 00:19:22.537 } 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "method": "sock_impl_set_options", 00:19:22.537 "params": { 00:19:22.537 "impl_name": "posix", 00:19:22.537 "recv_buf_size": 2097152, 00:19:22.537 "send_buf_size": 2097152, 00:19:22.537 "enable_recv_pipe": true, 00:19:22.537 "enable_quickack": false, 00:19:22.537 "enable_placement_id": 0, 00:19:22.537 "enable_zerocopy_send_server": true, 00:19:22.537 "enable_zerocopy_send_client": false, 00:19:22.537 "zerocopy_threshold": 0, 00:19:22.537 "tls_version": 0, 00:19:22.537 "enable_ktls": false 00:19:22.537 } 00:19:22.537 } 00:19:22.537 ] 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "subsystem": "vmd", 00:19:22.537 "config": [] 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "subsystem": "accel", 00:19:22.537 "config": [ 00:19:22.537 { 00:19:22.537 "method": "accel_set_options", 00:19:22.537 "params": { 00:19:22.537 "small_cache_size": 128, 00:19:22.537 "large_cache_size": 16, 00:19:22.537 "task_count": 2048, 00:19:22.537 "sequence_count": 2048, 00:19:22.537 "buf_count": 2048 00:19:22.537 } 00:19:22.537 } 00:19:22.537 ] 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "subsystem": "bdev", 00:19:22.537 "config": [ 00:19:22.537 { 00:19:22.537 "method": "bdev_set_options", 00:19:22.537 "params": { 00:19:22.537 "bdev_io_pool_size": 65535, 00:19:22.537 "bdev_io_cache_size": 256, 00:19:22.537 "bdev_auto_examine": true, 00:19:22.537 "iobuf_small_cache_size": 128, 00:19:22.537 "iobuf_large_cache_size": 16 00:19:22.537 } 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "method": "bdev_raid_set_options", 00:19:22.537 "params": { 00:19:22.537 "process_window_size_kb": 1024 00:19:22.537 } 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "method": "bdev_iscsi_set_options", 00:19:22.537 "params": { 00:19:22.537 "timeout_sec": 30 00:19:22.537 } 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "method": "bdev_nvme_set_options", 00:19:22.537 "params": { 00:19:22.537 "action_on_timeout": "none", 00:19:22.537 "timeout_us": 0, 00:19:22.537 "timeout_admin_us": 0, 00:19:22.537 "keep_alive_timeout_ms": 
10000, 00:19:22.537 "arbitration_burst": 0, 00:19:22.537 "low_priority_weight": 0, 00:19:22.537 "medium_priority_weight": 0, 00:19:22.537 "high_priority_weight": 0, 00:19:22.537 "nvme_adminq_poll_period_us": 10000, 00:19:22.537 "nvme_ioq_poll_period_us": 0, 00:19:22.537 "io_queue_requests": 512, 00:19:22.537 "delay_cmd_submit": true, 00:19:22.537 "transport_retry_count": 4, 00:19:22.537 "bdev_retry_count": 3, 00:19:22.537 "transport_ack_timeout": 0, 00:19:22.537 "ctrlr_loss_timeout_sec": 0, 00:19:22.537 "reconnect_delay_sec": 0, 00:19:22.537 "fast_io_fail_timeout_sec": 0, 00:19:22.537 "disable_auto_failback": false, 00:19:22.537 "generate_uuids": false, 00:19:22.537 "transport_tos": 0, 00:19:22.537 "nvme_error_stat": false, 00:19:22.537 "rdma_srq_size": 0, 00:19:22.537 "io_path_stat": false, 00:19:22.537 "allow_accel_sequence": false, 00:19:22.537 "rdma_max_cq_size": 0, 00:19:22.537 "rdma_cm_event_timeout_ms": 0, 00:19:22.538 "dhchap_digests": [ 00:19:22.538 "sha256", 00:19:22.538 "sha384", 00:19:22.538 "sha512" 00:19:22.538 ], 00:19:22.538 "dhchap_dhgroups": [ 00:19:22.538 "null", 00:19:22.538 "ffdhe2048", 00:19:22.538 "ffdhe3072", 00:19:22.538 "ffdhe4096", 00:19:22.538 "ffdhe6144", 00:19:22.538 "ffdhe8192" 00:19:22.538 ] 00:19:22.538 } 00:19:22.538 }, 00:19:22.538 { 00:19:22.538 "method": "bdev_nvme_attach_controller", 00:19:22.538 "params": { 00:19:22.538 "name": "nvme0", 00:19:22.538 "trtype": "TCP", 00:19:22.538 "adrfam": "IPv4", 00:19:22.538 "traddr": "10.0.0.2", 00:19:22.538 "trsvcid": "4420", 00:19:22.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.538 "prchk_reftag": false, 00:19:22.538 "prchk_guard": false, 00:19:22.538 "ctrlr_loss_timeout_sec": 0, 00:19:22.538 "reconnect_delay_sec": 0, 00:19:22.538 "fast_io_fail_timeout_sec": 0, 00:19:22.538 "psk": "key0", 00:19:22.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.538 "hdgst": false, 00:19:22.538 "ddgst": false 00:19:22.538 } 00:19:22.538 }, 00:19:22.538 { 00:19:22.538 "method": "bdev_nvme_set_hotplug", 00:19:22.538 "params": { 00:19:22.538 "period_us": 100000, 00:19:22.538 "enable": false 00:19:22.538 } 00:19:22.538 }, 00:19:22.538 { 00:19:22.538 "method": "bdev_enable_histogram", 00:19:22.538 "params": { 00:19:22.538 "name": "nvme0n1", 00:19:22.538 "enable": true 00:19:22.538 } 00:19:22.538 }, 00:19:22.538 { 00:19:22.538 "method": "bdev_wait_for_examine" 00:19:22.538 } 00:19:22.538 ] 00:19:22.538 }, 00:19:22.538 { 00:19:22.538 "subsystem": "nbd", 00:19:22.538 "config": [] 00:19:22.538 } 00:19:22.538 ] 00:19:22.538 }' 00:19:22.538 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.538 11:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.538 [2024-07-15 11:45:30.448894] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:22.538 [2024-07-15 11:45:30.448984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053915 ] 00:19:22.538 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.538 [2024-07-15 11:45:30.507059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.796 [2024-07-15 11:45:30.616157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.053 [2024-07-15 11:45:30.786991] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.618 11:45:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.618 11:45:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.618 11:45:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:23.618 11:45:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:23.876 11:45:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.876 11:45:31 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.876 Running I/O for 1 seconds... 00:19:25.291 00:19:25.291 Latency(us) 00:19:25.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.291 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:25.291 Verification LBA range: start 0x0 length 0x2000 00:19:25.291 nvme0n1 : 1.02 3592.81 14.03 0.00 0.00 35304.24 5776.88 30874.74 00:19:25.291 =================================================================================================================== 00:19:25.291 Total : 3592.81 14.03 0.00 0.00 35304.24 5776.88 30874.74 00:19:25.291 0 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:25.291 nvmf_trace.0 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3053915 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3053915 ']' 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3053915 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3053915 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3053915' 00:19:25.291 killing process with pid 3053915 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3053915 00:19:25.291 Received shutdown signal, test time was about 1.000000 seconds 00:19:25.291 00:19:25.291 Latency(us) 00:19:25.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.291 =================================================================================================================== 00:19:25.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.291 11:45:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3053915 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.291 rmmod nvme_tcp 00:19:25.291 rmmod nvme_fabrics 00:19:25.291 rmmod nvme_keyring 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3053764 ']' 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3053764 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3053764 ']' 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3053764 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3053764 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3053764' 00:19:25.291 killing process with pid 3053764 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3053764 00:19:25.291 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3053764 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.857 11:45:33 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.857 11:45:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.773 11:45:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.773 11:45:35 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sIinpP6axv /tmp/tmp.GhPFrzv0gs /tmp/tmp.WbnBCwb0Vw 00:19:27.773 00:19:27.773 real 1m20.160s 00:19:27.773 user 2m7.204s 00:19:27.773 sys 0m28.889s 00:19:27.773 11:45:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.773 11:45:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.773 ************************************ 00:19:27.773 END TEST nvmf_tls 00:19:27.773 ************************************ 00:19:27.773 11:45:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.773 11:45:35 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:27.773 11:45:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.773 11:45:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.773 11:45:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.773 ************************************ 00:19:27.773 START TEST nvmf_fips 00:19:27.773 ************************************ 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:27.773 * Looking for test storage... 
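[Annotation, not part of the captured log] The nvmf_fips run that starts here gates on the host OpenSSL before doing any NVMe/TCP work: the trace below compares the output of "openssl version" against 3.0.0 field by field, requires /usr/lib64/ossl-modules/fips.so to exist, lists the loaded providers, and finally expects "openssl md5" to fail once the FIPS provider is active. A condensed sketch of that gate, assuming a RHEL-style OpenSSL 3 layout as used on this node:

  # sketch of the version/provider gate traced below (not the real fips.sh compare logic)
  ver=$(openssl version | awk '{print $2}')        # e.g. 3.0.9, must be >= 3.0.0
  [[ -f /usr/lib64/ossl-modules/fips.so ]]         # FIPS module installed?
  openssl list -providers | grep -i name           # expect a base and a fips provider
  openssl md5 /dev/null && echo 'md5 unexpectedly allowed under FIPS'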
00:19:27.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.773 11:45:35 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:27.773 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:27.774 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:28.032 Error setting digest 00:19:28.032 00124A9C2B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:28.032 00124A9C2B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.032 11:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.934 
11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:29.934 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:29.934 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.934 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:30.195 Found net devices under 0000:84:00.0: cvl_0_0 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:30.195 Found net devices under 0000:84:00.1: cvl_0_1 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.195 11:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:19:30.195 00:19:30.195 --- 10.0.0.2 ping statistics --- 00:19:30.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.195 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:19:30.195 00:19:30.195 --- 10.0.0.1 ping statistics --- 00:19:30.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.195 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3056290 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3056290 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3056290 ']' 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.195 11:45:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.195 [2024-07-15 11:45:38.165159] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:30.196 [2024-07-15 11:45:38.165259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.454 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.454 [2024-07-15 11:45:38.229331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.454 [2024-07-15 11:45:38.329431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.454 [2024-07-15 11:45:38.329482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
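The nvmf_tcp_init plumbing traced above turns the two physical E810 ports into a point-to-point NVMe/TCP link: one port is moved into a private network namespace and addressed as the target, the other stays in the root namespace as the initiator. Condensed into plain commands (device, namespace, and address values are the ones this run actually used; a readability sketch, not the helper itself):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                               # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
    ping -c 1 10.0.0.2                                         # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator sanity check

After that, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ...), which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD in the trace.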
00:19:30.454 [2024-07-15 11:45:38.329505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.454 [2024-07-15 11:45:38.329516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.454 [2024-07-15 11:45:38.329525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.454 [2024-07-15 11:45:38.329555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:31.386 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:31.386 [2024-07-15 11:45:39.321567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.386 [2024-07-15 11:45:39.337526] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.386 [2024-07-15 11:45:39.337735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.386 [2024-07-15 11:45:39.368842] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:31.643 malloc0 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3056446 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3056446 /var/tmp/bdevperf.sock 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3056446 ']' 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.643 11:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.643 [2024-07-15 11:45:39.454467] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:31.643 [2024-07-15 11:45:39.454537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056446 ] 00:19:31.643 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.643 [2024-07-15 11:45:39.513806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.643 [2024-07-15 11:45:39.619367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.576 11:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.576 11:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:32.576 11:45:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:32.832 [2024-07-15 11:45:40.606898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.832 [2024-07-15 11:45:40.607042] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:32.832 TLSTESTn1 00:19:32.832 11:45:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.832 Running I/O for 10 seconds... 
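Condensed, the initiator half of the TLS check just traced is driven as follows (paths shortened to repo-relative form; the exact invocations are in the trace above):

    # start bdevperf idle (-z) on its own RPC socket: one core, 128-deep 4 KiB verify workload
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach the target's subsystem over TCP, handing the pre-shared TLS key to the bdev layer
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt

    # kick off the queued workload; the latency summary printed below is its output
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests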
00:19:45.024 00:19:45.024 Latency(us) 00:19:45.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.024 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.024 Verification LBA range: start 0x0 length 0x2000 00:19:45.024 TLSTESTn1 : 10.02 3608.12 14.09 0.00 0.00 35416.54 8543.95 32622.36 00:19:45.024 =================================================================================================================== 00:19:45.024 Total : 3608.12 14.09 0.00 0.00 35416.54 8543.95 32622.36 00:19:45.024 0 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:45.024 nvmf_trace.0 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3056446 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3056446 ']' 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3056446 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3056446 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3056446' 00:19:45.024 killing process with pid 3056446 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3056446 00:19:45.024 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.024 00:19:45.024 Latency(us) 00:19:45.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.024 =================================================================================================================== 00:19:45.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.024 [2024-07-15 11:45:50.931374] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:45.024 11:45:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3056446 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.024 rmmod nvme_tcp 00:19:45.024 rmmod nvme_fabrics 00:19:45.024 rmmod nvme_keyring 00:19:45.024 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3056290 ']' 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3056290 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3056290 ']' 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3056290 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3056290 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3056290' 00:19:45.025 killing process with pid 3056290 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3056290 00:19:45.025 [2024-07-15 11:45:51.260367] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3056290 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.025 11:45:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.593 11:45:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.593 11:45:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:45.593 00:19:45.593 real 0m17.933s 00:19:45.593 user 0m22.649s 00:19:45.593 sys 0m6.641s 00:19:45.593 11:45:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.593 11:45:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.593 ************************************ 00:19:45.593 END TEST nvmf_fips 
00:19:45.593 ************************************ 00:19:45.850 11:45:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:45.850 11:45:53 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:45.850 11:45:53 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:45.850 11:45:53 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:45.850 11:45:53 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:45.850 11:45:53 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.850 11:45:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:48.379 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:48.379 11:45:55 nvmf_tcp -- 
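The teardown that closed the fips test above (cleanup / nvmftestfini) reduces to roughly the following; the pid variables are stand-ins for the values captured earlier in this run:

    tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0   # keep the trace file for offline analysis
    kill "$bdevperf_pid"               # stop the initiator (pid 3056446 here)
    kill "$nvmfpid"                    # stop nvmf_tgt (pid 3056290 here)
    sync
    modprobe -v -r nvme-tcp            # also unloads nvme_fabrics and nvme_keyring
    ip -4 addr flush cvl_0_1           # drop the initiator address again
    rm -f test/nvmf/fips/key.txt       # remove the TLS PSK written at the start of the test
    # plus _remove_spdk_ns to delete the cvl_0_0_ns_spdk namespace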
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:48.379 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:48.379 Found net devices under 0000:84:00.0: cvl_0_0 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:48.379 Found net devices under 0000:84:00.1: cvl_0_1 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:48.379 11:45:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:48.379 11:45:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:48.379 11:45:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:19:48.379 11:45:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.379 ************************************ 00:19:48.379 START TEST nvmf_perf_adq 00:19:48.379 ************************************ 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:48.379 * Looking for test storage... 00:19:48.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.379 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.380 11:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.281 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:50.282 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:50.282 Found 0000:84:00.1 (0x8086 - 0x159b) 
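gather_supported_nvmf_pci_devs, re-run here at the start of the perf_adq test, buckets NICs by PCI vendor:device ID and keeps the bucket selected by SPDK_TEST_NVMF_NICS. Both ports on this rig report 0x8086:0x159b (Intel E810), so the e810 bucket wins and its net devices become cvl_0_0/cvl_0_1. Stripped to the essentials, with the same variable names the helper uses:

    intel=0x8086
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})      # matches 0000:84:00.0 and 0000:84:00.1 here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")                        # SPDK_TEST_NVMF_NICS=e810
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")       # -> cvl_0_0, cvl_0_1
    done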
00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:50.282 Found net devices under 0000:84:00.0: cvl_0_0 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:50.282 Found net devices under 0000:84:00.1: cvl_0_1 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:50.282 11:45:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:50.850 11:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:52.803 11:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:58.090 11:46:05 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:58.090 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:58.091 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:58.091 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:58.091 Found net devices under 0000:84:00.0: cvl_0_0 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:58.091 Found net devices under 0000:84:00.1: cvl_0_1 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.091 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.092 11:46:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:19:58.092 00:19:58.092 --- 10.0.0.2 ping statistics --- 00:19:58.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.092 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:58.092 00:19:58.092 --- 10.0.0.1 ping statistics --- 00:19:58.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.092 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3062350 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3062350 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3062350 ']' 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 [2024-07-15 11:46:05.783200] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
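A minimal sketch of the loopback topology that nvmf_tcp_init builds in the trace above, assuming the interface names (cvl_0_0, cvl_0_1) and addresses (10.0.0.1/10.0.0.2) observed in this run: one E810 port is moved into the cvl_0_0_ns_spdk namespace to carry the NVMe/TCP target listener, the other stays in the root namespace as the initiator, and a one-packet ping in each direction verifies connectivity before nvmf_tgt is launched inside the namespace.

#!/usr/bin/env bash
# Loopback topology sketch (names and IPs taken from this run; not a general setup).
TARGET_IF=cvl_0_0          # moved into the namespace, carries the target listener
INITIATOR_IF=cvl_0_1       # stays in the root namespace, used by spdk_nvme_perf
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator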
00:19:58.092 [2024-07-15 11:46:05.783300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.092 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.092 [2024-07-15 11:46:05.850392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.092 [2024-07-15 11:46:05.964020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.092 [2024-07-15 11:46:05.964080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.092 [2024-07-15 11:46:05.964109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.092 [2024-07-15 11:46:05.964121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.092 [2024-07-15 11:46:05.964130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.092 [2024-07-15 11:46:05.964182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.092 [2024-07-15 11:46:05.964208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.092 [2024-07-15 11:46:05.964275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.092 [2024-07-15 11:46:05.964278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.092 11:46:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.092 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.093 11:46:06 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.350 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.351 [2024-07-15 11:46:06.185558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.351 Malloc1 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.351 [2024-07-15 11:46:06.239271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3062498 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:58.351 11:46:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:58.351 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.267 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:00.267 11:46:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.267 11:46:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.582 11:46:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.582 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:00.582 
"tick_rate": 2700000000, 00:20:00.582 "poll_groups": [ 00:20:00.582 { 00:20:00.582 "name": "nvmf_tgt_poll_group_000", 00:20:00.582 "admin_qpairs": 1, 00:20:00.582 "io_qpairs": 1, 00:20:00.582 "current_admin_qpairs": 1, 00:20:00.582 "current_io_qpairs": 1, 00:20:00.582 "pending_bdev_io": 0, 00:20:00.582 "completed_nvme_io": 20309, 00:20:00.582 "transports": [ 00:20:00.582 { 00:20:00.582 "trtype": "TCP" 00:20:00.582 } 00:20:00.582 ] 00:20:00.582 }, 00:20:00.582 { 00:20:00.582 "name": "nvmf_tgt_poll_group_001", 00:20:00.582 "admin_qpairs": 0, 00:20:00.582 "io_qpairs": 1, 00:20:00.582 "current_admin_qpairs": 0, 00:20:00.583 "current_io_qpairs": 1, 00:20:00.583 "pending_bdev_io": 0, 00:20:00.583 "completed_nvme_io": 20583, 00:20:00.583 "transports": [ 00:20:00.583 { 00:20:00.583 "trtype": "TCP" 00:20:00.583 } 00:20:00.583 ] 00:20:00.583 }, 00:20:00.583 { 00:20:00.583 "name": "nvmf_tgt_poll_group_002", 00:20:00.583 "admin_qpairs": 0, 00:20:00.583 "io_qpairs": 1, 00:20:00.583 "current_admin_qpairs": 0, 00:20:00.583 "current_io_qpairs": 1, 00:20:00.583 "pending_bdev_io": 0, 00:20:00.583 "completed_nvme_io": 20627, 00:20:00.583 "transports": [ 00:20:00.583 { 00:20:00.583 "trtype": "TCP" 00:20:00.583 } 00:20:00.583 ] 00:20:00.583 }, 00:20:00.583 { 00:20:00.583 "name": "nvmf_tgt_poll_group_003", 00:20:00.583 "admin_qpairs": 0, 00:20:00.583 "io_qpairs": 1, 00:20:00.583 "current_admin_qpairs": 0, 00:20:00.583 "current_io_qpairs": 1, 00:20:00.583 "pending_bdev_io": 0, 00:20:00.583 "completed_nvme_io": 20223, 00:20:00.583 "transports": [ 00:20:00.583 { 00:20:00.583 "trtype": "TCP" 00:20:00.583 } 00:20:00.583 ] 00:20:00.583 } 00:20:00.583 ] 00:20:00.583 }' 00:20:00.583 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:00.583 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:00.583 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:00.583 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:00.583 11:46:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3062498 00:20:08.744 Initializing NVMe Controllers 00:20:08.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:08.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:08.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:08.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:08.745 Initialization complete. Launching workers. 
00:20:08.745 ======================================================== 00:20:08.745 Latency(us) 00:20:08.745 Device Information : IOPS MiB/s Average min max 00:20:08.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10527.80 41.12 6080.11 2529.63 8951.20 00:20:08.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10751.50 42.00 5953.01 2165.64 8565.63 00:20:08.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10888.90 42.53 5878.84 2712.21 9261.62 00:20:08.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10643.10 41.57 6013.18 2874.90 8817.79 00:20:08.745 ======================================================== 00:20:08.745 Total : 42811.30 167.23 5980.36 2165.64 9261.62 00:20:08.745 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.745 rmmod nvme_tcp 00:20:08.745 rmmod nvme_fabrics 00:20:08.745 rmmod nvme_keyring 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3062350 ']' 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3062350 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3062350 ']' 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3062350 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3062350 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3062350' 00:20:08.745 killing process with pid 3062350 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3062350 00:20:08.745 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3062350 00:20:09.002 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.003 11:46:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.901 11:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.901 11:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:10.901 11:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:11.468 11:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:13.407 11:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.683 11:46:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:18.683 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:18.683 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
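The block above is gather_supported_nvmf_pci_devs re-running after the ice driver reload: it filters the cached PCI bus for supported device IDs (E810 0x1592/0x159b here) and then resolves each matching function to its kernel net device through sysfs. A compressed sketch of that sysfs lookup, using the two BDFs found in this run and omitting the pci_bus_cache bookkeeping:

#!/usr/bin/env bash
# Sysfs lookup sketch: map each supported PCI function to its net device name.
# The BDFs below are the two E810 ports detected in this run; other systems differ.
pci_devs=(0000:84:00.0 0000:84:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done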
00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:18.683 Found net devices under 0000:84:00.0: cvl_0_0 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:18.683 Found net devices under 0000:84:00.1: cvl_0_1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.683 
11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:20:18.683 00:20:18.683 --- 10.0.0.2 ping statistics --- 00:20:18.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.683 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:20:18.683 00:20:18.683 --- 10.0.0.1 ping statistics --- 00:20:18.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.683 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:18.683 net.core.busy_poll = 1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:18.683 net.core.busy_read = 1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:18.683 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3065117 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3065117 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3065117 ']' 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.941 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 [2024-07-15 11:46:26.739711] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:18.941 [2024-07-15 11:46:26.739832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.941 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.941 [2024-07-15 11:46:26.804438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.941 [2024-07-15 11:46:26.904754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.941 [2024-07-15 11:46:26.904810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.941 [2024-07-15 11:46:26.904832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.941 [2024-07-15 11:46:26.904843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.941 [2024-07-15 11:46:26.904852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
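The adq_configure_driver block above (perf_adq.sh@22 through @38) switched the target port into ADQ mode before this second nvmf_tgt instance was started: hardware TC offload is enabled, busy polling is turned on, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic for port 4420 into the second class in hardware. A sketch of that sequence, assuming the namespace, interface, IP and queue layout from this run:

#!/usr/bin/env bash
# ADQ configuration sketch (namespace, interface, IP and queue split taken from this run).
NS_EXEC=(ip netns exec cvl_0_0_ns_spdk)
IF=cvl_0_0
"${NS_EXEC[@]}" ethtool --offload "$IF" hw-tc-offload on
"${NS_EXEC[@]}" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (NVMe/TCP).
"${NS_EXEC[@]}" /usr/sbin/tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
"${NS_EXEC[@]}" /usr/sbin/tc qdisc add dev "$IF" ingress
# Hardware-only (skip_sw) flower filter: NVMe/TCP to 10.0.0.2:4420 lands in TC1.
"${NS_EXEC[@]}" /usr/sbin/tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Pin XPS/RX queues to CPUs with the helper shipped in the SPDK tree
# ($SPDK_DIR is a placeholder for the workspace checkout path used in this run).
"${NS_EXEC[@]}" "$SPDK_DIR/scripts/perf/nvmf/set_xps_rxqs" "$IF"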
00:20:18.941 [2024-07-15 11:46:26.904934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.941 [2024-07-15 11:46:26.904996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.941 [2024-07-15 11:46:26.905061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.941 [2024-07-15 11:46:26.905063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:19.199 11:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 [2024-07-15 11:46:27.122810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 Malloc1 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.199 [2024-07-15 11:46:27.174334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3065156 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:19.199 11:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:19.456 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:21.352 "tick_rate": 2700000000, 00:20:21.352 "poll_groups": [ 00:20:21.352 { 00:20:21.352 "name": "nvmf_tgt_poll_group_000", 00:20:21.352 "admin_qpairs": 1, 00:20:21.352 "io_qpairs": 2, 00:20:21.352 "current_admin_qpairs": 1, 00:20:21.352 "current_io_qpairs": 2, 00:20:21.352 "pending_bdev_io": 0, 00:20:21.352 "completed_nvme_io": 26176, 00:20:21.352 "transports": [ 00:20:21.352 { 00:20:21.352 "trtype": "TCP" 00:20:21.352 } 00:20:21.352 ] 00:20:21.352 }, 00:20:21.352 { 00:20:21.352 "name": "nvmf_tgt_poll_group_001", 00:20:21.352 "admin_qpairs": 0, 00:20:21.352 "io_qpairs": 2, 00:20:21.352 "current_admin_qpairs": 0, 00:20:21.352 "current_io_qpairs": 2, 00:20:21.352 "pending_bdev_io": 0, 00:20:21.352 "completed_nvme_io": 26152, 00:20:21.352 "transports": [ 00:20:21.352 { 00:20:21.352 "trtype": "TCP" 00:20:21.352 } 00:20:21.352 ] 00:20:21.352 }, 00:20:21.352 { 00:20:21.352 "name": "nvmf_tgt_poll_group_002", 00:20:21.352 "admin_qpairs": 0, 00:20:21.352 "io_qpairs": 0, 00:20:21.352 "current_admin_qpairs": 0, 00:20:21.352 "current_io_qpairs": 0, 00:20:21.352 "pending_bdev_io": 0, 00:20:21.352 "completed_nvme_io": 0, 
00:20:21.352 "transports": [ 00:20:21.352 { 00:20:21.352 "trtype": "TCP" 00:20:21.352 } 00:20:21.352 ] 00:20:21.352 }, 00:20:21.352 { 00:20:21.352 "name": "nvmf_tgt_poll_group_003", 00:20:21.352 "admin_qpairs": 0, 00:20:21.352 "io_qpairs": 0, 00:20:21.352 "current_admin_qpairs": 0, 00:20:21.352 "current_io_qpairs": 0, 00:20:21.352 "pending_bdev_io": 0, 00:20:21.352 "completed_nvme_io": 0, 00:20:21.352 "transports": [ 00:20:21.352 { 00:20:21.352 "trtype": "TCP" 00:20:21.352 } 00:20:21.352 ] 00:20:21.352 } 00:20:21.352 ] 00:20:21.352 }' 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:21.352 11:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3065156 00:20:29.452 Initializing NVMe Controllers 00:20:29.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:29.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:29.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:29.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:29.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:29.452 Initialization complete. Launching workers. 00:20:29.452 ======================================================== 00:20:29.452 Latency(us) 00:20:29.452 Device Information : IOPS MiB/s Average min max 00:20:29.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5846.90 22.84 10948.35 1724.86 53725.17 00:20:29.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7957.60 31.08 8044.22 1868.07 53020.53 00:20:29.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6411.40 25.04 9987.29 1776.60 53595.57 00:20:29.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7394.90 28.89 8684.65 1687.90 53983.88 00:20:29.452 ======================================================== 00:20:29.452 Total : 27610.80 107.85 9281.92 1687.90 53983.88 00:20:29.452 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.452 rmmod nvme_tcp 00:20:29.452 rmmod nvme_fabrics 00:20:29.452 rmmod nvme_keyring 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3065117 ']' 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3065117 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3065117 ']' 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3065117 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.452 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3065117 00:20:29.709 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:29.709 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:29.709 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3065117' 00:20:29.709 killing process with pid 3065117 00:20:29.709 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3065117 00:20:29.709 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3065117 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.968 11:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.254 11:46:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:33.254 11:46:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:33.254 00:20:33.254 real 0m45.001s 00:20:33.254 user 2m40.114s 00:20:33.254 sys 0m9.750s 00:20:33.254 11:46:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.254 11:46:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.254 ************************************ 00:20:33.254 END TEST nvmf_perf_adq 00:20:33.254 ************************************ 00:20:33.254 11:46:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:33.254 11:46:40 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:33.254 11:46:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:33.254 11:46:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.254 11:46:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.254 ************************************ 00:20:33.254 START TEST nvmf_shutdown 00:20:33.254 ************************************ 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:33.254 * Looking for test storage... 
00:20:33.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.254 11:46:40 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:33.255 ************************************ 00:20:33.255 START TEST nvmf_shutdown_tc1 00:20:33.255 ************************************ 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:33.255 11:46:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:33.255 11:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:35.159 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:35.159 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:35.160 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.160 11:46:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:35.160 Found net devices under 0000:84:00.0: cvl_0_0 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:35.160 Found net devices under 0000:84:00.1: cvl_0_1 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.160 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:35.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:20:35.433 00:20:35.433 --- 10.0.0.2 ping statistics --- 00:20:35.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.433 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:20:35.433 00:20:35.433 --- 10.0.0.1 ping statistics --- 00:20:35.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.433 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3068460 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3068460 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3068460 ']' 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.433 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.433 [2024-07-15 11:46:43.298499] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:20:35.434 [2024-07-15 11:46:43.298565] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.434 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.434 [2024-07-15 11:46:43.361319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.698 [2024-07-15 11:46:43.468262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.698 [2024-07-15 11:46:43.468317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.698 [2024-07-15 11:46:43.468341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.698 [2024-07-15 11:46:43.468352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.698 [2024-07-15 11:46:43.468361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.698 [2024-07-15 11:46:43.468505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.698 [2024-07-15 11:46:43.468574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.698 [2024-07-15 11:46:43.468641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.698 [2024-07-15 11:46:43.468645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.698 [2024-07-15 11:46:43.627676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.698 11:46:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.698 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.699 11:46:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.956 Malloc1 00:20:35.956 [2024-07-15 11:46:43.707810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.956 Malloc2 00:20:35.956 Malloc3 00:20:35.956 Malloc4 00:20:35.956 Malloc5 00:20:35.956 Malloc6 00:20:36.214 Malloc7 00:20:36.214 Malloc8 00:20:36.214 Malloc9 00:20:36.214 Malloc10 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3068632 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3068632 
/var/tmp/bdevperf.sock 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3068632 ']' 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.214 { 00:20:36.214 "params": { 00:20:36.214 "name": "Nvme$subsystem", 00:20:36.214 "trtype": "$TEST_TRANSPORT", 00:20:36.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.214 "adrfam": "ipv4", 00:20:36.214 "trsvcid": "$NVMF_PORT", 00:20:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.214 "hdgst": ${hdgst:-false}, 00:20:36.214 "ddgst": ${ddgst:-false} 00:20:36.214 }, 00:20:36.214 "method": "bdev_nvme_attach_controller" 00:20:36.214 } 00:20:36.214 EOF 00:20:36.214 )") 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.214 { 00:20:36.214 "params": { 00:20:36.214 "name": "Nvme$subsystem", 00:20:36.214 "trtype": "$TEST_TRANSPORT", 00:20:36.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.214 "adrfam": "ipv4", 00:20:36.214 "trsvcid": "$NVMF_PORT", 00:20:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.214 "hdgst": ${hdgst:-false}, 00:20:36.214 "ddgst": ${ddgst:-false} 00:20:36.214 }, 00:20:36.214 "method": "bdev_nvme_attach_controller" 00:20:36.214 } 00:20:36.214 EOF 00:20:36.214 )") 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.214 { 00:20:36.214 "params": { 00:20:36.214 
"name": "Nvme$subsystem", 00:20:36.214 "trtype": "$TEST_TRANSPORT", 00:20:36.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.214 "adrfam": "ipv4", 00:20:36.214 "trsvcid": "$NVMF_PORT", 00:20:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.214 "hdgst": ${hdgst:-false}, 00:20:36.214 "ddgst": ${ddgst:-false} 00:20:36.214 }, 00:20:36.214 "method": "bdev_nvme_attach_controller" 00:20:36.214 } 00:20:36.214 EOF 00:20:36.214 )") 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.214 { 00:20:36.214 "params": { 00:20:36.214 "name": "Nvme$subsystem", 00:20:36.214 "trtype": "$TEST_TRANSPORT", 00:20:36.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.214 "adrfam": "ipv4", 00:20:36.214 "trsvcid": "$NVMF_PORT", 00:20:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.214 "hdgst": ${hdgst:-false}, 00:20:36.214 "ddgst": ${ddgst:-false} 00:20:36.214 }, 00:20:36.214 "method": "bdev_nvme_attach_controller" 00:20:36.214 } 00:20:36.214 EOF 00:20:36.214 )") 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.214 { 00:20:36.214 "params": { 00:20:36.214 "name": "Nvme$subsystem", 00:20:36.214 "trtype": "$TEST_TRANSPORT", 00:20:36.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.214 "adrfam": "ipv4", 00:20:36.214 "trsvcid": "$NVMF_PORT", 00:20:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.214 "hdgst": ${hdgst:-false}, 00:20:36.214 "ddgst": ${ddgst:-false} 00:20:36.214 }, 00:20:36.214 "method": "bdev_nvme_attach_controller" 00:20:36.214 } 00:20:36.214 EOF 00:20:36.214 )") 00:20:36.214 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.215 { 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme$subsystem", 00:20:36.215 "trtype": "$TEST_TRANSPORT", 00:20:36.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "$NVMF_PORT", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.215 "hdgst": ${hdgst:-false}, 00:20:36.215 "ddgst": ${ddgst:-false} 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 } 00:20:36.215 EOF 00:20:36.215 )") 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.215 { 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme$subsystem", 
00:20:36.215 "trtype": "$TEST_TRANSPORT", 00:20:36.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "$NVMF_PORT", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.215 "hdgst": ${hdgst:-false}, 00:20:36.215 "ddgst": ${ddgst:-false} 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 } 00:20:36.215 EOF 00:20:36.215 )") 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.215 { 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme$subsystem", 00:20:36.215 "trtype": "$TEST_TRANSPORT", 00:20:36.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "$NVMF_PORT", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.215 "hdgst": ${hdgst:-false}, 00:20:36.215 "ddgst": ${ddgst:-false} 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 } 00:20:36.215 EOF 00:20:36.215 )") 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.215 { 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme$subsystem", 00:20:36.215 "trtype": "$TEST_TRANSPORT", 00:20:36.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "$NVMF_PORT", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.215 "hdgst": ${hdgst:-false}, 00:20:36.215 "ddgst": ${ddgst:-false} 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 } 00:20:36.215 EOF 00:20:36.215 )") 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.215 { 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme$subsystem", 00:20:36.215 "trtype": "$TEST_TRANSPORT", 00:20:36.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "$NVMF_PORT", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.215 "hdgst": ${hdgst:-false}, 00:20:36.215 "ddgst": ${ddgst:-false} 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 } 00:20:36.215 EOF 00:20:36.215 )") 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:36.215 11:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme1", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme2", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme3", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme4", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme5", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme6", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme7", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme8", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.215 "hdgst": false, 
00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme9", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 },{ 00:20:36.215 "params": { 00:20:36.215 "name": "Nvme10", 00:20:36.215 "trtype": "tcp", 00:20:36.215 "traddr": "10.0.0.2", 00:20:36.215 "adrfam": "ipv4", 00:20:36.215 "trsvcid": "4420", 00:20:36.215 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.215 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.215 "hdgst": false, 00:20:36.215 "ddgst": false 00:20:36.215 }, 00:20:36.215 "method": "bdev_nvme_attach_controller" 00:20:36.215 }' 00:20:36.215 [2024-07-15 11:46:44.197505] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:36.215 [2024-07-15 11:46:44.197593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:36.474 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.474 [2024-07-15 11:46:44.262187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.474 [2024-07-15 11:46:44.373486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3068632 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:38.372 11:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:39.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3068632 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3068460 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:39.305 11:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.305 { 00:20:39.305 "params": { 00:20:39.305 "name": "Nvme$subsystem", 00:20:39.305 "trtype": "$TEST_TRANSPORT", 00:20:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.305 "adrfam": "ipv4", 00:20:39.305 "trsvcid": "$NVMF_PORT", 00:20:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.305 "hdgst": ${hdgst:-false}, 00:20:39.305 "ddgst": ${ddgst:-false} 00:20:39.305 }, 00:20:39.305 "method": "bdev_nvme_attach_controller" 00:20:39.305 } 00:20:39.305 EOF 00:20:39.305 )") 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.305 { 00:20:39.305 "params": { 00:20:39.305 "name": "Nvme$subsystem", 00:20:39.305 "trtype": "$TEST_TRANSPORT", 00:20:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.305 "adrfam": "ipv4", 00:20:39.305 "trsvcid": "$NVMF_PORT", 00:20:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.305 "hdgst": ${hdgst:-false}, 00:20:39.305 "ddgst": ${ddgst:-false} 00:20:39.305 }, 00:20:39.305 "method": "bdev_nvme_attach_controller" 00:20:39.305 } 00:20:39.305 EOF 00:20:39.305 )") 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.305 { 00:20:39.305 "params": { 00:20:39.305 "name": "Nvme$subsystem", 00:20:39.305 "trtype": "$TEST_TRANSPORT", 00:20:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.305 "adrfam": "ipv4", 00:20:39.305 "trsvcid": "$NVMF_PORT", 00:20:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.305 "hdgst": ${hdgst:-false}, 00:20:39.305 "ddgst": ${ddgst:-false} 00:20:39.305 }, 00:20:39.305 "method": "bdev_nvme_attach_controller" 00:20:39.305 } 00:20:39.305 EOF 00:20:39.305 )") 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.305 { 00:20:39.305 "params": { 00:20:39.305 "name": "Nvme$subsystem", 00:20:39.305 "trtype": "$TEST_TRANSPORT", 00:20:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.305 "adrfam": "ipv4", 00:20:39.305 "trsvcid": "$NVMF_PORT", 00:20:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.305 "hdgst": ${hdgst:-false}, 00:20:39.305 "ddgst": ${ddgst:-false} 00:20:39.305 }, 00:20:39.305 "method": "bdev_nvme_attach_controller" 00:20:39.305 } 00:20:39.305 EOF 00:20:39.305 )") 00:20:39.305 11:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.305 { 00:20:39.305 "params": { 00:20:39.305 "name": "Nvme$subsystem", 00:20:39.305 "trtype": "$TEST_TRANSPORT", 00:20:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.305 "adrfam": "ipv4", 00:20:39.305 "trsvcid": "$NVMF_PORT", 00:20:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.305 "hdgst": ${hdgst:-false}, 00:20:39.305 "ddgst": ${ddgst:-false} 00:20:39.305 }, 00:20:39.305 "method": "bdev_nvme_attach_controller" 00:20:39.305 } 00:20:39.305 EOF 00:20:39.305 )") 00:20:39.305 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.306 { 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme$subsystem", 00:20:39.306 "trtype": "$TEST_TRANSPORT", 00:20:39.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "$NVMF_PORT", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.306 "hdgst": ${hdgst:-false}, 00:20:39.306 "ddgst": ${ddgst:-false} 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 } 00:20:39.306 EOF 00:20:39.306 )") 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.306 { 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme$subsystem", 00:20:39.306 "trtype": "$TEST_TRANSPORT", 00:20:39.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "$NVMF_PORT", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.306 "hdgst": ${hdgst:-false}, 00:20:39.306 "ddgst": ${ddgst:-false} 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 } 00:20:39.306 EOF 00:20:39.306 )") 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.306 { 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme$subsystem", 00:20:39.306 "trtype": "$TEST_TRANSPORT", 00:20:39.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "$NVMF_PORT", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.306 "hdgst": ${hdgst:-false}, 00:20:39.306 "ddgst": ${ddgst:-false} 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 } 00:20:39.306 EOF 00:20:39.306 )") 00:20:39.306 11:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.306 { 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme$subsystem", 00:20:39.306 "trtype": "$TEST_TRANSPORT", 00:20:39.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "$NVMF_PORT", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.306 "hdgst": ${hdgst:-false}, 00:20:39.306 "ddgst": ${ddgst:-false} 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 } 00:20:39.306 EOF 00:20:39.306 )") 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.306 { 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme$subsystem", 00:20:39.306 "trtype": "$TEST_TRANSPORT", 00:20:39.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "$NVMF_PORT", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.306 "hdgst": ${hdgst:-false}, 00:20:39.306 "ddgst": ${ddgst:-false} 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 } 00:20:39.306 EOF 00:20:39.306 )") 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:39.306 11:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme1", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme2", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme3", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme4", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme5", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme6", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme7", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:39.306 "hdgst": false, 00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme8", 00:20:39.306 "trtype": "tcp", 00:20:39.306 "traddr": "10.0.0.2", 00:20:39.306 "adrfam": "ipv4", 00:20:39.306 "trsvcid": "4420", 00:20:39.306 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:39.306 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:39.306 "hdgst": false, 
00:20:39.306 "ddgst": false 00:20:39.306 }, 00:20:39.306 "method": "bdev_nvme_attach_controller" 00:20:39.306 },{ 00:20:39.306 "params": { 00:20:39.306 "name": "Nvme9", 00:20:39.307 "trtype": "tcp", 00:20:39.307 "traddr": "10.0.0.2", 00:20:39.307 "adrfam": "ipv4", 00:20:39.307 "trsvcid": "4420", 00:20:39.307 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:39.307 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:39.307 "hdgst": false, 00:20:39.307 "ddgst": false 00:20:39.307 }, 00:20:39.307 "method": "bdev_nvme_attach_controller" 00:20:39.307 },{ 00:20:39.307 "params": { 00:20:39.307 "name": "Nvme10", 00:20:39.307 "trtype": "tcp", 00:20:39.307 "traddr": "10.0.0.2", 00:20:39.307 "adrfam": "ipv4", 00:20:39.307 "trsvcid": "4420", 00:20:39.307 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:39.307 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:39.307 "hdgst": false, 00:20:39.307 "ddgst": false 00:20:39.307 }, 00:20:39.307 "method": "bdev_nvme_attach_controller" 00:20:39.307 }' 00:20:39.307 [2024-07-15 11:46:47.223915] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:39.307 [2024-07-15 11:46:47.223993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069054 ] 00:20:39.307 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.307 [2024-07-15 11:46:47.288657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.575 [2024-07-15 11:46:47.399913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.033 Running I/O for 1 seconds... 00:20:42.405 00:20:42.405 Latency(us) 00:20:42.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme1n1 : 1.12 228.93 14.31 0.00 0.00 275467.95 19126.80 243891.01 00:20:42.405 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme2n1 : 1.11 234.08 14.63 0.00 0.00 263949.88 14854.83 226803.11 00:20:42.405 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme3n1 : 1.12 232.81 14.55 0.00 0.00 261374.37 7136.14 260978.92 00:20:42.405 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme4n1 : 1.13 227.20 14.20 0.00 0.00 265273.27 17087.91 271853.04 00:20:42.405 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme5n1 : 1.15 222.73 13.92 0.00 0.00 266215.35 20000.62 268746.15 00:20:42.405 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme6n1 : 1.15 223.54 13.97 0.00 0.00 260589.61 21554.06 270299.59 00:20:42.405 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme7n1 : 1.14 229.92 14.37 0.00 0.00 248083.98 4369.07 256318.58 00:20:42.405 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 
0x0 length 0x400 00:20:42.405 Nvme8n1 : 1.14 224.74 14.05 0.00 0.00 249896.96 21845.33 268746.15 00:20:42.405 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme9n1 : 1.16 221.00 13.81 0.00 0.00 250131.15 23107.51 267192.70 00:20:42.405 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.405 Verification LBA range: start 0x0 length 0x400 00:20:42.405 Nvme10n1 : 1.20 267.26 16.70 0.00 0.00 204448.96 7378.87 285834.05 00:20:42.405 =================================================================================================================== 00:20:42.405 Total : 2312.20 144.51 0.00 0.00 253339.75 4369.07 285834.05 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.663 rmmod nvme_tcp 00:20:42.663 rmmod nvme_fabrics 00:20:42.663 rmmod nvme_keyring 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3068460 ']' 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3068460 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3068460 ']' 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3068460 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068460 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068460' 00:20:42.663 killing process with pid 3068460 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3068460 00:20:42.663 11:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3068460 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.231 11:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.140 00:20:45.140 real 0m12.123s 00:20:45.140 user 0m35.135s 00:20:45.140 sys 0m3.343s 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.140 ************************************ 00:20:45.140 END TEST nvmf_shutdown_tc1 00:20:45.140 ************************************ 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.140 11:46:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:45.398 ************************************ 00:20:45.398 START TEST nvmf_shutdown_tc2 00:20:45.398 ************************************ 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.398 11:46:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:45.398 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:45.398 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:45.398 Found net devices under 0000:84:00.0: cvl_0_0 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:45.398 Found net devices under 0000:84:00.1: cvl_0_1 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:45.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:20:45.398 00:20:45.398 --- 10.0.0.2 ping statistics --- 00:20:45.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.398 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:45.398 00:20:45.398 --- 10.0.0.1 ping statistics --- 00:20:45.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.398 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.398 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3069819 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3069819 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3069819 ']' 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.399 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.399 [2024-07-15 11:46:53.363315] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:45.399 [2024-07-15 11:46:53.363394] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.656 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.656 [2024-07-15 11:46:53.430878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.656 [2024-07-15 11:46:53.542884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.656 [2024-07-15 11:46:53.542946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.656 [2024-07-15 11:46:53.542960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.656 [2024-07-15 11:46:53.542972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.656 [2024-07-15 11:46:53.542981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
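The tracepoint notice above names two ways to pull the nvmf target's trace data while it runs. A minimal sketch, assuming a default SPDK build layout for the spdk_trace binary; the app name nvmf, shm id 0, and the /dev/shm/nvmf_trace.0 path come straight from the notice itself:

    # Dump a live snapshot of the nvmf target's tracepoints (app name nvmf, shm id 0).
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt

    # Or keep the raw shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0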
00:20:45.656 [2024-07-15 11:46:53.543134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.656 [2024-07-15 11:46:53.543196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.656 [2024-07-15 11:46:53.543263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:45.656 [2024-07-15 11:46:53.543265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.914 [2024-07-15 11:46:53.700430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.914 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.915 11:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.915 Malloc1 00:20:45.915 [2024-07-15 11:46:53.789953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.915 Malloc2 00:20:45.915 Malloc3 00:20:46.172 Malloc4 00:20:46.172 Malloc5 00:20:46.172 Malloc6 00:20:46.172 Malloc7 00:20:46.172 Malloc8 00:20:46.431 Malloc9 00:20:46.431 Malloc10 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3070002 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3070002 /var/tmp/bdevperf.sock 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3070002 ']' 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
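While bdevperf comes up on /var/tmp/bdevperf.sock, the test polls it for read completions before tearing the target down (the waitforio loop traced further below). A condensed sketch of that polling pattern, assuming scripts/rpc.py from the same SPDK tree stands in for the rpc_cmd helper; the socket path, bdev name, jq filter, 100-op threshold, and 0.25 s sleep all appear in the trace:

    # Poll Nvme1n1's read count over bdevperf's RPC socket until it crosses 100,
    # mirroring the "-ge 100" check and 0.25 s sleep in the waitforio trace below.
    for i in {10..1}; do
        ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 0.25
    done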
00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 
00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.431 "adrfam": "ipv4", 00:20:46.431 "trsvcid": "$NVMF_PORT", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.431 "hdgst": ${hdgst:-false}, 00:20:46.431 "ddgst": ${ddgst:-false} 00:20:46.431 }, 00:20:46.431 "method": "bdev_nvme_attach_controller" 00:20:46.431 } 00:20:46.431 EOF 00:20:46.431 )") 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.431 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.431 { 00:20:46.431 "params": { 00:20:46.431 "name": "Nvme$subsystem", 00:20:46.431 "trtype": "$TEST_TRANSPORT", 00:20:46.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "$NVMF_PORT", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.432 "hdgst": ${hdgst:-false}, 00:20:46.432 "ddgst": ${ddgst:-false} 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 } 00:20:46.432 EOF 00:20:46.432 )") 00:20:46.432 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:46.432 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:46.432 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:46.432 11:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme1", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme2", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme3", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme4", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme5", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme6", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme7", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme8", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:46.432 "hdgst": false, 
00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme9", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 },{ 00:20:46.432 "params": { 00:20:46.432 "name": "Nvme10", 00:20:46.432 "trtype": "tcp", 00:20:46.432 "traddr": "10.0.0.2", 00:20:46.432 "adrfam": "ipv4", 00:20:46.432 "trsvcid": "4420", 00:20:46.432 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:46.432 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:46.432 "hdgst": false, 00:20:46.432 "ddgst": false 00:20:46.432 }, 00:20:46.432 "method": "bdev_nvme_attach_controller" 00:20:46.432 }' 00:20:46.432 [2024-07-15 11:46:54.303216] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:46.432 [2024-07-15 11:46:54.303305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070002 ] 00:20:46.432 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.432 [2024-07-15 11:46:54.367199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.690 [2024-07-15 11:46:54.478800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.587 Running I/O for 10 seconds... 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.587 11:46:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:48.587 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:48.845 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:49.103 11:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3070002 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3070002 ']' 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3070002 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3070002 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3070002' 00:20:49.103 killing process with pid 3070002 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3070002 00:20:49.103 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3070002 00:20:49.361 Received shutdown signal, test time was about 0.952141 seconds 00:20:49.361 00:20:49.361 Latency(us) 00:20:49.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme1n1 : 0.91 226.25 14.14 0.00 0.00 273834.95 9175.04 253211.69 00:20:49.361 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme2n1 : 0.92 230.46 14.40 0.00 0.00 262874.46 19612.25 251658.24 00:20:49.361 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme3n1 : 0.95 269.10 16.82 0.00 0.00 225603.51 17379.18 270299.59 00:20:49.361 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme4n1 : 0.95 270.15 16.88 0.00 0.00 219304.20 18835.53 260978.92 00:20:49.361 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme5n1 : 0.93 210.85 13.18 0.00 0.00 274506.33 3446.71 268746.15 00:20:49.361 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme6n1 : 0.94 203.76 12.73 0.00 0.00 279308.26 21651.15 278066.82 00:20:49.361 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme7n1 : 0.91 210.17 13.14 0.00 0.00 263350.30 22330.79 274959.93 00:20:49.361 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme8n1 : 0.92 213.35 13.33 0.00 0.00 251549.01 7961.41 265639.25 00:20:49.361 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme9n1 : 0.94 208.55 13.03 0.00 0.00 254045.02 4490.43 290494.39 00:20:49.361 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.361 Verification LBA range: start 0x0 length 0x400 00:20:49.361 Nvme10n1 : 0.93 206.92 12.93 0.00 0.00 250343.73 22622.06 246997.90 00:20:49.361 
=================================================================================================================== 00:20:49.361 Total : 2249.56 140.60 0.00 0.00 253697.39 3446.71 290494.39 00:20:49.619 11:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3069819 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.551 rmmod nvme_tcp 00:20:50.551 rmmod nvme_fabrics 00:20:50.551 rmmod nvme_keyring 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3069819 ']' 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3069819 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3069819 ']' 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3069819 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3069819 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3069819' 00:20:50.551 killing process with pid 3069819 00:20:50.551 11:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3069819 00:20:50.551 11:46:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3069819 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.116 11:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.640 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.640 00:20:53.641 real 0m7.983s 00:20:53.641 user 0m24.543s 00:20:53.641 sys 0m1.544s 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 ************************************ 00:20:53.641 END TEST nvmf_shutdown_tc2 00:20:53.641 ************************************ 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 ************************************ 00:20:53.641 START TEST nvmf_shutdown_tc3 00:20:53.641 ************************************ 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.641 11:47:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:53.641 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:53.641 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:53.641 Found net devices under 0000:84:00.0: cvl_0_0 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:53.641 Found net devices under 0000:84:00.1: cvl_0_1 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.641 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.642 11:47:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:20:53.642 00:20:53.642 --- 10.0.0.2 ping statistics --- 00:20:53.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.642 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:20:53.642 00:20:53.642 --- 10.0.0.1 ping statistics --- 00:20:53.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.642 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3070912 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3070912 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3070912 ']' 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.642 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.642 [2024-07-15 11:47:01.390303] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:53.642 [2024-07-15 11:47:01.390384] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.642 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.642 [2024-07-15 11:47:01.457910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.642 [2024-07-15 11:47:01.565944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.642 [2024-07-15 11:47:01.565999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.642 [2024-07-15 11:47:01.566037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.642 [2024-07-15 11:47:01.566049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.642 [2024-07-15 11:47:01.566059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
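At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace with core mask 0x1E and tracepoint mask 0xFFFF, and waitforlisten 3070912 blocks until the target answers on its RPC socket before any configuration RPCs are issued. A minimal sketch of that kind of readiness check, assuming scripts/rpc.py from the SPDK tree, the rpc_get_methods RPC, and the default /var/tmp/spdk.sock path (the function name is illustrative; the real waitforlisten lives in the shared autotest_common.sh sourced by this run):

    # Minimal readiness-check sketch, assuming scripts/rpc.py and the default
    # RPC socket; not the autotest waitforlisten implementation.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Give up early if the target died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the app is listening on the socket.
            if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

In this run that check is what gates the nvmf_create_transport and subsystem-creation RPCs that follow.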
00:20:53.642 [2024-07-15 11:47:01.566141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.642 [2024-07-15 11:47:01.566165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.642 [2024-07-15 11:47:01.566223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.642 [2024-07-15 11:47:01.566226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.899 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.899 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 [2024-07-15 11:47:01.706422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.900 11:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 Malloc1 00:20:53.900 [2024-07-15 11:47:01.781192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.900 Malloc2 00:20:53.900 Malloc3 00:20:54.157 Malloc4 00:20:54.157 Malloc5 00:20:54.157 Malloc6 00:20:54.157 Malloc7 00:20:54.157 Malloc8 00:20:54.414 Malloc9 00:20:54.414 Malloc10 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3071091 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3071091 /var/tmp/bdevperf.sock 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3071091 ']' 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:54.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.414 { 00:20:54.414 "params": { 00:20:54.414 "name": "Nvme$subsystem", 00:20:54.414 "trtype": "$TEST_TRANSPORT", 00:20:54.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.414 "adrfam": "ipv4", 00:20:54.414 "trsvcid": "$NVMF_PORT", 00:20:54.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.414 "hdgst": ${hdgst:-false}, 00:20:54.414 "ddgst": ${ddgst:-false} 00:20:54.414 }, 00:20:54.414 "method": "bdev_nvme_attach_controller" 00:20:54.414 } 00:20:54.414 EOF 00:20:54.414 )") 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.414 { 00:20:54.414 "params": { 00:20:54.414 "name": "Nvme$subsystem", 00:20:54.414 "trtype": "$TEST_TRANSPORT", 00:20:54.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.414 "adrfam": "ipv4", 00:20:54.414 "trsvcid": "$NVMF_PORT", 00:20:54.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.414 "hdgst": ${hdgst:-false}, 00:20:54.414 "ddgst": ${ddgst:-false} 00:20:54.414 }, 00:20:54.414 "method": "bdev_nvme_attach_controller" 00:20:54.414 } 00:20:54.414 EOF 00:20:54.414 )") 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.414 { 00:20:54.414 "params": { 00:20:54.414 "name": "Nvme$subsystem", 00:20:54.414 "trtype": "$TEST_TRANSPORT", 00:20:54.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.414 "adrfam": "ipv4", 00:20:54.414 "trsvcid": "$NVMF_PORT", 00:20:54.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.414 "hdgst": ${hdgst:-false}, 00:20:54.414 "ddgst": ${ddgst:-false} 00:20:54.414 }, 00:20:54.414 "method": "bdev_nvme_attach_controller" 00:20:54.414 } 00:20:54.414 EOF 00:20:54.414 )") 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.414 { 00:20:54.414 "params": { 00:20:54.414 "name": "Nvme$subsystem", 00:20:54.414 "trtype": "$TEST_TRANSPORT", 00:20:54.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.414 "adrfam": "ipv4", 00:20:54.414 "trsvcid": "$NVMF_PORT", 
00:20:54.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.414 "hdgst": ${hdgst:-false}, 00:20:54.414 "ddgst": ${ddgst:-false} 00:20:54.414 }, 00:20:54.414 "method": "bdev_nvme_attach_controller" 00:20:54.414 } 00:20:54.414 EOF 00:20:54.414 )") 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.414 { 00:20:54.414 "params": { 00:20:54.414 "name": "Nvme$subsystem", 00:20:54.414 "trtype": "$TEST_TRANSPORT", 00:20:54.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.414 "adrfam": "ipv4", 00:20:54.414 "trsvcid": "$NVMF_PORT", 00:20:54.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.414 "hdgst": ${hdgst:-false}, 00:20:54.414 "ddgst": ${ddgst:-false} 00:20:54.414 }, 00:20:54.414 "method": "bdev_nvme_attach_controller" 00:20:54.414 } 00:20:54.414 EOF 00:20:54.414 )") 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.414 { 00:20:54.414 "params": { 00:20:54.414 "name": "Nvme$subsystem", 00:20:54.414 "trtype": "$TEST_TRANSPORT", 00:20:54.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.414 "adrfam": "ipv4", 00:20:54.414 "trsvcid": "$NVMF_PORT", 00:20:54.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.414 "hdgst": ${hdgst:-false}, 00:20:54.414 "ddgst": ${ddgst:-false} 00:20:54.414 }, 00:20:54.414 "method": "bdev_nvme_attach_controller" 00:20:54.414 } 00:20:54.414 EOF 00:20:54.414 )") 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.414 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.415 { 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme$subsystem", 00:20:54.415 "trtype": "$TEST_TRANSPORT", 00:20:54.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "$NVMF_PORT", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.415 "hdgst": ${hdgst:-false}, 00:20:54.415 "ddgst": ${ddgst:-false} 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 } 00:20:54.415 EOF 00:20:54.415 )") 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.415 { 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme$subsystem", 00:20:54.415 "trtype": "$TEST_TRANSPORT", 00:20:54.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "$NVMF_PORT", 00:20:54.415 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.415 "hdgst": ${hdgst:-false}, 00:20:54.415 "ddgst": ${ddgst:-false} 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 } 00:20:54.415 EOF 00:20:54.415 )") 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.415 { 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme$subsystem", 00:20:54.415 "trtype": "$TEST_TRANSPORT", 00:20:54.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "$NVMF_PORT", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.415 "hdgst": ${hdgst:-false}, 00:20:54.415 "ddgst": ${ddgst:-false} 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 } 00:20:54.415 EOF 00:20:54.415 )") 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.415 { 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme$subsystem", 00:20:54.415 "trtype": "$TEST_TRANSPORT", 00:20:54.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "$NVMF_PORT", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.415 "hdgst": ${hdgst:-false}, 00:20:54.415 "ddgst": ${ddgst:-false} 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 } 00:20:54.415 EOF 00:20:54.415 )") 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:54.415 11:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme1", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme2", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme3", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme4", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme5", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme6", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme7", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme8", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:54.415 "hdgst": false, 
00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme9", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 },{ 00:20:54.415 "params": { 00:20:54.415 "name": "Nvme10", 00:20:54.415 "trtype": "tcp", 00:20:54.415 "traddr": "10.0.0.2", 00:20:54.415 "adrfam": "ipv4", 00:20:54.415 "trsvcid": "4420", 00:20:54.415 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:54.415 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:54.415 "hdgst": false, 00:20:54.415 "ddgst": false 00:20:54.415 }, 00:20:54.415 "method": "bdev_nvme_attach_controller" 00:20:54.415 }' 00:20:54.415 [2024-07-15 11:47:02.296482] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:54.415 [2024-07-15 11:47:02.296566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071091 ] 00:20:54.415 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.415 [2024-07-15 11:47:02.359904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.673 [2024-07-15 11:47:02.471608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.569 Running I/O for 10 seconds... 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=82 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 82 -ge 100 ']' 00:20:57.133 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:57.391 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:57.391 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.391 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:57.391 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.391 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.391 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=149 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 149 -ge 100 ']' 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3070912 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3070912 ']' 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3070912 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3070912 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3070912' 00:20:57.660 killing process with pid 3070912 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3070912 00:20:57.660 11:47:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3070912 00:20:57.660 
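The target (pid 3070912) is killed here while bdevperf is still driving the 64-deep verify workload; that is the point of tc3. The gate for the kill is the waitforio loop traced just above: poll bdevperf's per-bdev iostat over /var/tmp/bdevperf.sock until Nvme1n1 has completed at least 100 reads (82 on the first pass, 149 on the second), then break and let killprocess run. A sketch mirroring that traced logic, assuming scripts/rpc.py and jq as used in the trace (the function name is illustrative, not the shutdown.sh helper itself):

    # Sketch of the waitforio gate traced above: only succeed once the bdev has
    # completed enough reads that the target is killed with I/O in flight.
    waitforio_sketch() {
        local sock=$1 bdev=$2 i reads
        for ((i = 10; i > 0; i--)); do
            reads=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [[ ${reads:-0} -ge 100 ]]; then
                return 0
            fi
            sleep 0.25
        done
        return 1
    }
    # usage (matching this run): waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1 && kill "$nvmfpid"

Everything that follows is the fallout of killing the target mid-I/O: the target's TCP transport logs recv-state errors from tcp.c while bdevperf's NVMe driver prints the in-flight WRITE commands it completes with ABORTED - SQ DELETION (00/08) status. Both processes write to this same console, so their log lines interleave in the output below.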
[2024-07-15 11:47:05.424848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.660 [2024-07-15 11:47:05.424924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.660 [2024-07-15 11:47:05.424955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.660 [2024-07-15 11:47:05.424971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.660 [2024-07-15 11:47:05.424989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.660 [2024-07-15 11:47:05.424951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with [2024-07-15 11:47:05.425004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:57.660 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.660 [2024-07-15 11:47:05.425025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 [2024-07-15 11:47:05.425058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 11:47:05.425086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 [2024-07-15 11:47:05.425129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:28288 len:1[2024-07-15 11:47:05.425142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 11:47:05.425158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with [2024-07-15 11:47:05.425176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1the state(5) to be set 00:20:57.661 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with [2024-07-15 11:47:05.425199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:57.661 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 [2024-07-15 11:47:05.425214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 [2024-07-15 11:47:05.425241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with [2024-07-15 11:47:05.425254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1the state(5) to be set 00:20:57.661 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 [2024-07-15 11:47:05.425299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661 [2024-07-15 11:47:05.425313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661 [2024-07-15 11:47:05.425322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661 [2024-07-15 11:47:05.425327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.661
[2024-07-15 11:47:05.425675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.661
[2024-07-15 11:47:05.425687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.661
[2024-07-15 11:47:05.425688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.425957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265460 is same with the state(5) to be set 00:20:57.662
[2024-07-15 11:47:05.425971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662
[2024-07-15 11:47:05.425987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662
[2024-07-15 11:47:05.426001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:57.662 [2024-07-15 11:47:05.426016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.662 [2024-07-15 11:47:05.426338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 
11:47:05.426644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.662 [2024-07-15 11:47:05.426757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.662 [2024-07-15 11:47:05.426773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.426979] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.426992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.427008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.427023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.427038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.427063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.427167] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dea610 was disconnected and freed. reset controller. 00:20:57.663 [2024-07-15 11:47:05.429226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663 [2024-07-15 11:47:05.429673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663 [2024-07-15 11:47:05.429688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663 [2024-07-15 11:47:05.429699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663 [2024-07-15 11:47:05.429702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663 [2024-07-15 11:47:05.429712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 
is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.663
[2024-07-15 11:47:05.429946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.663
[2024-07-15 11:47:05.429958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.663
[2024-07-15 11:47:05.429960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.429973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.429974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.429991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.429993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.664
[2024-07-15 11:47:05.430610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22633c0 is same with the state(5) to be set 00:20:57.664
[2024-07-15 11:47:05.430626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.664
[2024-07-15 11:47:05.430640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665
[2024-07-15 11:47:05.430655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665
[2024-07-15 11:47:05.430670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665
[2024-07-15 11:47:05.430684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665
[2024-07-15 11:47:05.430698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665
[2024-07-15 11:47:05.430713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665
[2024-07-15 11:47:05.430726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665
[2024-07-15 11:47:05.430763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665
[2024-07-15 11:47:05.430779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665
[2024-07-15 11:47:05.430798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665
[2024-07-15 11:47:05.430811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.430827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.430846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.430862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.430876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.430896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.430911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.430926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.430940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.430955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.430972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.430988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.665 [2024-07-15 11:47:05.431314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.431390] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1deb910 was disconnected and freed. reset controller. 
00:20:57.665 [2024-07-15 11:47:05.431856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.431992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controlle[2024-07-15 11:47:05.432000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with r 00:20:57.665 the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968200 (9): Bad file descriptor 00:20:57.665 [2024-07-15 11:47:05.432094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.665 [2024-07-15 11:47:05.432144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with [2024-07-15 11:47:05.432156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:20:57.665 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.665 [2024-07-15 11:47:05.432171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.665 [2024-07-15 11:47:05.432184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2a690 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be 
set 00:20:57.666 [2024-07-15 11:47:05.432326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46980 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 
11:47:05.432488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 
[2024-07-15 11:47:05.432675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da6eb0 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263880 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.432723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.666 [2024-07-15 11:47:05.432868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.666 [2024-07-15 11:47:05.432882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec90 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.666 [2024-07-15 11:47:05.434255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 
11:47:05.434467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:57.667 [2024-07-15 11:47:05.434642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46980 (9): Bad file descriptor 00:20:57.667 [2024-07-15 11:47:05.434683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be 
set 00:20:57.667 [2024-07-15 11:47:05.434732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.434949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2263d20 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.435820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.667 [2024-07-15 11:47:05.435849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1968200 with addr=10.0.0.2, port=4420 00:20:57.667 [2024-07-15 11:47:05.435866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1968200 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the 
state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.667 [2024-07-15 11:47:05.436587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 
11:47:05.436721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.436952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22641c0 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438093] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.668 [2024-07-15 11:47:05.438194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 
[2024-07-15 11:47:05.438263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.668 [2024-07-15 11:47:05.438277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46980 with addr=10.0.0.2, port=4420 00:20:57.668 [2024-07-15 11:47:05.438303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46980 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968200 (9): Bad file descriptor 00:20:57.668 [2024-07-15 11:47:05.438343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438434] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.668 [2024-07-15 11:47:05.438448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438486] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the 
state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46980 (9): Bad file descriptor 00:20:57.668 [2024-07-15 11:47:05.438864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.668 [2024-07-15 11:47:05.438884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.668 [2024-07-15 11:47:05.438899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.668 [2024-07-15 11:47:05.438916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.668 [2024-07-15 11:47:05.439022] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.668 [2024-07-15 11:47:05.439603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.668 [2024-07-15 11:47:05.439626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:57.668 [2024-07-15 11:47:05.439639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:57.668 [2024-07-15 11:47:05.439652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:57.668 [2024-07-15 11:47:05.440066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.668 [2024-07-15 11:47:05.440138] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.668 [2024-07-15 11:47:05.440999] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.668 [2024-07-15 11:47:05.442617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2a690 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.442676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.442735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.442797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.442826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.442853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1880 is same with the state(5) to be set 00:20:57.669 [2024-07-15 11:47:05.442918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.442954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.442982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.442996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.443010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.443039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.443051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187c610 is same with the state(5) to be set 00:20:57.669 [2024-07-15 11:47:05.443078] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6eb0 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.443122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9ec90 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.443166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.443185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.443199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.443212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.443225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.443238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.443262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.669 [2024-07-15 11:47:05.443282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.443294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbe850 is same with the state(5) to be set 00:20:57.669 [2024-07-15 11:47:05.444835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.669 [2024-07-15 11:47:05.445035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.669 [2024-07-15 11:47:05.445061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1968200 with addr=10.0.0.2, port=4420 00:20:57.669 [2024-07-15 11:47:05.445077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1968200 is same with the state(5) to be set 00:20:57.669 [2024-07-15 11:47:05.445141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968200 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.445189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.669 [2024-07-15 11:47:05.445205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.669 [2024-07-15 11:47:05.445218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.669 [2024-07-15 11:47:05.445265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.669 [2024-07-15 11:47:05.446009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:57.669 [2024-07-15 11:47:05.446334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.669 [2024-07-15 11:47:05.446359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46980 with addr=10.0.0.2, port=4420 00:20:57.669 [2024-07-15 11:47:05.446373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46980 is same with the state(5) to be set 00:20:57.669 [2024-07-15 11:47:05.446422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46980 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.446469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:57.669 [2024-07-15 11:47:05.446485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:57.669 [2024-07-15 11:47:05.446497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:57.669 [2024-07-15 11:47:05.446545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.669 [2024-07-15 11:47:05.452656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db1880 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.452707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187c610 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.452773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbe850 (9): Bad file descriptor 00:20:57.669 [2024-07-15 11:47:05.452895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.452918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.452939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.452955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.452972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.452987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.669 [2024-07-15 11:47:05.453312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.669 [2024-07-15 11:47:05.453326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.670 [2024-07-15 11:47:05.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 
11:47:05.453694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.453978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.453993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.670 [2024-07-15 11:47:05.454656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.670 [2024-07-15 11:47:05.454670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.454952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.454967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1decda0 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456537] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 
[2024-07-15 11:47:05.456918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.456984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.456993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.456996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.457007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.457009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 
[2024-07-15 11:47:05.457022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.457025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.671 [2024-07-15 11:47:05.457040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264660 is same with the state(5) to be set 00:20:57.671 [2024-07-15 11:47:05.457040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.671 [2024-07-15 11:47:05.457058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.457469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.457485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.470793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.470900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.470920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.470938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.470952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.470969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.470984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 
[2024-07-15 11:47:05.471016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 
11:47:05.471337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.471633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.471649] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6a130 is same with the state(5) to be set 00:20:57.672 [2024-07-15 11:47:05.473124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.473150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.473179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.473195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.473210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cea0 is same with the state(5) to be set 00:20:57.672 [2024-07-15 11:47:05.473287] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f1cea0 was disconnected and freed. reset controller. 00:20:57.672 [2024-07-15 11:47:05.473370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.473392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.672 [2024-07-15 11:47:05.473415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.672 [2024-07-15 11:47:05.473431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473594] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.473976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.473990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:57.673 [2024-07-15 11:47:05.474574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.673 [2024-07-15 11:47:05.474588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.674 [2024-07-15 11:47:05.474915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.474977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.474992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 
11:47:05.475232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.475430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.475445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2f90 is same with the state(5) to be set 00:20:57.674 [2024-07-15 11:47:05.477601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:57.674 [2024-07-15 11:47:05.477634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:57.674 [2024-07-15 11:47:05.477663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:57.674 [2024-07-15 11:47:05.477866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.477890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.477906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.477924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.477939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.477953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.477967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.477981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.477995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39920 is same with the state(5) to be set 00:20:57.674 [2024-07-15 11:47:05.478049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.478070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.478086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.478100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.478115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.478129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.478144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.674 [2024-07-15 11:47:05.478158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.478171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39b00 is same with the state(5) to be set 00:20:57.674 [2024-07-15 11:47:05.478199] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
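A side note on the repeated completion entries: the "(00/08)" that SPDK prints after "ABORTED - SQ DELETION" is the NVMe completion status as (status code type / status code) in hex; type 00 is the generic command status set, and code 08 in that set is "Command Aborted due to SQ Deletion". A small stand-alone decoder covering just the codes seen in this log (an illustration only, not SPDK's own table):

    # Hedged sketch: map the "(sct/sc)" pair from the log to a readable string.
    SCT = {0x0: "GENERIC"}                                   # status code types seen here
    SC_GENERIC = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}  # generic-set codes seen here

    def decode(pair: str) -> str:
        """Turn a 'sct/sc' pair such as '00/08' into a readable status string."""
        sct, sc = (int(x, 16) for x in pair.split("/"))
        return f"{SCT.get(sct, hex(sct))}: {SC_GENERIC.get(sc, hex(sc))}"

    print(decode("00/08"))  # -> GENERIC: ABORTED - SQ DELETION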
00:20:57.674 [2024-07-15 11:47:05.479200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:57.674 [2024-07-15 11:47:05.479429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.674 [2024-07-15 11:47:05.479460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da6eb0 with addr=10.0.0.2, port=4420 00:20:57.674 [2024-07-15 11:47:05.479477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da6eb0 is same with the state(5) to be set 00:20:57.674 [2024-07-15 11:47:05.479659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.674 [2024-07-15 11:47:05.479689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9ec90 with addr=10.0.0.2, port=4420 00:20:57.674 [2024-07-15 11:47:05.479706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9ec90 is same with the state(5) to be set 00:20:57.674 [2024-07-15 11:47:05.479846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.674 [2024-07-15 11:47:05.479871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2a690 with addr=10.0.0.2, port=4420 00:20:57.674 [2024-07-15 11:47:05.479887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2a690 is same with the state(5) to be set 00:20:57.674 [2024-07-15 11:47:05.480510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.674 [2024-07-15 11:47:05.480534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.674 [2024-07-15 11:47:05.480556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.480974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.480988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.675 [2024-07-15 11:47:05.481346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 
11:47:05.481681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.675 [2024-07-15 11:47:05.481921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.675 [2024-07-15 11:47:05.481937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.481951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.481967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.481982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.481998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.482541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.482557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d77350 is same with the state(5) to be set 00:20:57.676 [2024-07-15 11:47:05.483830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.483853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.483874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.483889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.483906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.483920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.483937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.483951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.483967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.483982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.483999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.676 [2024-07-15 11:47:05.484510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.676 [2024-07-15 11:47:05.484526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.484980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.484997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.677 [2024-07-15 11:47:05.485461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.677 [2024-07-15 11:47:05.485615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.677 [2024-07-15 11:47:05.485629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-07-15 11:47:05.485645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.678 [2024-07-15 11:47:05.485662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-07-15 11:47:05.485679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.678 [2024-07-15 11:47:05.485694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-07-15 11:47:05.485710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.678 [2024-07-15 11:47:05.485724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-07-15 11:47:05.485747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.678 [2024-07-15 11:47:05.485763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-07-15 
11:47:05.485780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.678 [2024-07-15 11:47:05.485794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:57.678 [2024-07-15 11:47:05.485811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.678 [2024-07-15 11:47:05.485825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:57.678 [2024-07-15 11:47:05.485839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0fb70 is same with the state(5) to be set
00:20:57.678 [2024-07-15 11:47:05.487715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:57.678 [2024-07-15 11:47:05.487756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:57.678 [2024-07-15 11:47:05.487788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:57.678 task offset: 27520 on job bdev=Nvme1n1 fails
00:20:57.678
00:20:57.678 Latency(us)
00:20:57.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.678 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme1n1 ended in about 0.92 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme1n1 : 0.92 208.12 13.01 69.37 0.00 228027.73 4490.43 259425.47
00:20:57.678 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme2n1 ended in about 0.93 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme2n1 : 0.93 207.02 12.94 69.01 0.00 224629.10 5801.15 250104.79
00:20:57.678 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme3n1 ended in about 0.95 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme3n1 : 0.95 202.23 12.64 67.41 0.00 225521.40 18835.53 243891.01
00:20:57.678 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme4n1 ended in about 0.97 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme4n1 : 0.97 198.73 12.42 66.24 0.00 225145.36 18544.26 262532.36
00:20:57.678 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme5n1 ended in about 0.98 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme5n1 : 0.98 131.02 8.19 65.51 0.00 297747.09 19418.07 278066.82
00:20:57.678 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme6n1 ended in about 0.98 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme6n1 : 0.98 130.58 8.16 65.29 0.00 292974.74 19709.35 262532.36
00:20:57.678 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme7n1 ended in about 0.97 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme7n1 : 0.97 195.40 12.21 2.06 0.00 272273.76 19709.35 253211.69
00:20:57.678 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme8n1 : 0.93 206.16 12.88 0.00 0.00 264867.33 18252.99 237677.23
00:20:57.678 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme9n1 : 0.94 205.14 12.82 0.00 0.00 260509.39 24175.50 273406.48
00:20:57.678 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:57.678 Job: Nvme10n1 ended in about 0.97 seconds with error
00:20:57.678 Verification LBA range: start 0x0 length 0x400
00:20:57.678 Nvme10n1 : 0.97 131.98 8.25 65.99 0.00 265984.88 21651.15 288940.94
00:20:57.678 ===================================================================================================================
00:20:57.678 Total : 1816.36 113.52 470.88 0.00 252246.06 4490.43 288940.94
00:20:57.678 [2024-07-15 11:47:05.515405] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:57.678 [2024-07-15 11:47:05.515494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:57.678 [2024-07-15 11:47:05.515830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.678 [2024-07-15 11:47:05.515867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187c610 with addr=10.0.0.2, port=4420
00:20:57.678 [2024-07-15 11:47:05.515889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187c610 is same with the state(5) to be set
00:20:57.678 [2024-07-15 11:47:05.515918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6eb0 (9): Bad file descriptor
00:20:57.678 [2024-07-15 11:47:05.515942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9ec90 (9): Bad file descriptor
00:20:57.678 [2024-07-15 11:47:05.515961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2a690 (9): Bad file descriptor
00:20:57.678 [2024-07-15 11:47:05.516015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39920 (9): Bad file descriptor
00:20:57.678 [2024-07-15 11:47:05.516057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39b00 (9): Bad file descriptor
00:20:57.678 [2024-07-15 11:47:05.516744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.678 [2024-07-15 11:47:05.516776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1968200 with addr=10.0.0.2, port=4420
00:20:57.678 [2024-07-15 11:47:05.516796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1968200 is same with the state(5) to be set
00:20:57.678 [2024-07-15 11:47:05.516915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.678 [2024-07-15 11:47:05.516941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46980 with addr=10.0.0.2, port=4420
00:20:57.678 [2024-07-15 11:47:05.516958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46980 is same with the state(5) to be set
00:20:57.678 [2024-07-15 11:47:05.517121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.678 [2024-07-15 11:47:05.517148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of
tqpair=0x1db1880 with addr=10.0.0.2, port=4420 00:20:57.678 [2024-07-15 11:47:05.517165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db1880 is same with the state(5) to be set 00:20:57.678 [2024-07-15 11:47:05.517315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.678 [2024-07-15 11:47:05.517341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbe850 with addr=10.0.0.2, port=4420 00:20:57.678 [2024-07-15 11:47:05.517357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbe850 is same with the state(5) to be set 00:20:57.678 [2024-07-15 11:47:05.517375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187c610 (9): Bad file descriptor 00:20:57.678 [2024-07-15 11:47:05.517405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:57.678 [2024-07-15 11:47:05.517420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:57.678 [2024-07-15 11:47:05.517436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:57.678 [2024-07-15 11:47:05.517457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:57.678 [2024-07-15 11:47:05.517472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:57.678 [2024-07-15 11:47:05.517485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:57.678 [2024-07-15 11:47:05.517504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:57.678 [2024-07-15 11:47:05.517519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:57.678 [2024-07-15 11:47:05.517533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:57.678 [2024-07-15 11:47:05.517555] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.678 [2024-07-15 11:47:05.517576] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.678 [2024-07-15 11:47:05.517595] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.678 [2024-07-15 11:47:05.517614] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.678 [2024-07-15 11:47:05.518268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.678 [2024-07-15 11:47:05.518295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.678 [2024-07-15 11:47:05.518309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.678 [2024-07-15 11:47:05.518334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968200 (9): Bad file descriptor 00:20:57.678 [2024-07-15 11:47:05.518356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46980 (9): Bad file descriptor 00:20:57.678 [2024-07-15 11:47:05.518374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db1880 (9): Bad file descriptor 00:20:57.678 [2024-07-15 11:47:05.518392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbe850 (9): Bad file descriptor 00:20:57.678 [2024-07-15 11:47:05.518407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:57.678 [2024-07-15 11:47:05.518421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:57.678 [2024-07-15 11:47:05.518435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:57.678 [2024-07-15 11:47:05.518503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:57.679 [2024-07-15 11:47:05.518527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:57.679 [2024-07-15 11:47:05.518544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.679 [2024-07-15 11:47:05.518579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.679 [2024-07-15 11:47:05.518597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.679 [2024-07-15 11:47:05.518610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.679 [2024-07-15 11:47:05.518627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:57.679 [2024-07-15 11:47:05.518642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:57.679 [2024-07-15 11:47:05.518656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:57.679 [2024-07-15 11:47:05.518673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:57.679 [2024-07-15 11:47:05.518687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:57.679 [2024-07-15 11:47:05.518701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:57.679 [2024-07-15 11:47:05.518717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:57.679 [2024-07-15 11:47:05.518730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:57.679 [2024-07-15 11:47:05.518755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:57.679 [2024-07-15 11:47:05.518812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.679 [2024-07-15 11:47:05.518832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.679 [2024-07-15 11:47:05.518844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.679 [2024-07-15 11:47:05.518856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.679 [2024-07-15 11:47:05.518999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.679 [2024-07-15 11:47:05.519025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39920 with addr=10.0.0.2, port=4420 00:20:57.679 [2024-07-15 11:47:05.519041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39920 is same with the state(5) to be set 00:20:57.679 [2024-07-15 11:47:05.519186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.679 [2024-07-15 11:47:05.519211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39b00 with addr=10.0.0.2, port=4420 00:20:57.679 [2024-07-15 11:47:05.519227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39b00 is same with the state(5) to be set 00:20:57.679 [2024-07-15 11:47:05.519271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39920 (9): Bad file descriptor 00:20:57.679 [2024-07-15 11:47:05.519295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39b00 (9): Bad file descriptor 00:20:57.679 [2024-07-15 11:47:05.519336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:57.679 [2024-07-15 11:47:05.519354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:57.679 [2024-07-15 11:47:05.519369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:57.679 [2024-07-15 11:47:05.519386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:57.679 [2024-07-15 11:47:05.519400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:57.679 [2024-07-15 11:47:05.519425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:57.679 [2024-07-15 11:47:05.519461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.679 [2024-07-15 11:47:05.519484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.247 11:47:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:58.247 11:47:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3071091 00:20:59.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3071091) - No such process 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.182 rmmod nvme_tcp 00:20:59.182 rmmod nvme_fabrics 00:20:59.182 rmmod nvme_keyring 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.182 11:47:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.717 11:47:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.717 00:21:01.717 real 0m7.996s 00:21:01.717 user 0m20.858s 00:21:01.717 sys 0m1.530s 00:21:01.717 
11:47:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.717 11:47:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.717 ************************************ 00:21:01.717 END TEST nvmf_shutdown_tc3 00:21:01.717 ************************************ 00:21:01.717 11:47:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:01.717 11:47:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:01.717 00:21:01.717 real 0m28.338s 00:21:01.717 user 1m20.643s 00:21:01.717 sys 0m6.561s 00:21:01.717 11:47:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.717 11:47:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:01.717 ************************************ 00:21:01.717 END TEST nvmf_shutdown 00:21:01.717 ************************************ 00:21:01.717 11:47:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:01.717 11:47:09 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:01.717 11:47:09 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.717 11:47:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.717 11:47:09 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:01.717 11:47:09 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.717 11:47:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.718 11:47:09 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:01.718 11:47:09 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:01.718 11:47:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:01.718 11:47:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.718 11:47:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.718 ************************************ 00:21:01.718 START TEST nvmf_multicontroller 00:21:01.718 ************************************ 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:01.718 * Looking for test storage... 
00:21:01.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:01.718 11:47:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.718 11:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:03.623 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.624 11:47:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:03.624 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:03.624 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:03.624 Found net devices under 0000:84:00.0: cvl_0_0 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:03.624 Found net devices under 0000:84:00.1: cvl_0_1 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.624 11:47:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:21:03.624 00:21:03.624 --- 10.0.0.2 ping statistics --- 00:21:03.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.624 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:21:03.624 00:21:03.624 --- 10.0.0.1 ping statistics --- 00:21:03.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.624 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:03.624 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3073639 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3073639 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3073639 ']' 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.926 11:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.926 [2024-07-15 11:47:11.673942] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:03.926 [2024-07-15 11:47:11.674018] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.926 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.926 [2024-07-15 11:47:11.740082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:03.926 [2024-07-15 11:47:11.846937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.926 [2024-07-15 11:47:11.846991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.926 [2024-07-15 11:47:11.847014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.926 [2024-07-15 11:47:11.847040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.926 [2024-07-15 11:47:11.847049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.926 [2024-07-15 11:47:11.847131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.926 [2024-07-15 11:47:11.847196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.926 [2024-07-15 11:47:11.847199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 [2024-07-15 11:47:12.685717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 Malloc0 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 [2024-07-15 11:47:12.745754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.866 
11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.867 [2024-07-15 11:47:12.753621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.867 Malloc1 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3073793 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3073793 /var/tmp/bdevperf.sock 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3073793 ']' 00:21:04.867 11:47:12 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.867 11:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.434 NVMe0n1 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.434 1 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.434 request: 00:21:05.434 { 00:21:05.434 "name": "NVMe0", 00:21:05.434 "trtype": "tcp", 00:21:05.434 "traddr": "10.0.0.2", 00:21:05.434 "adrfam": "ipv4", 00:21:05.434 "trsvcid": "4420", 00:21:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.434 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:05.434 "hostaddr": "10.0.0.2", 00:21:05.434 "hostsvcid": "60000", 00:21:05.434 "prchk_reftag": false, 00:21:05.434 "prchk_guard": false, 00:21:05.434 "hdgst": false, 00:21:05.434 "ddgst": false, 00:21:05.434 "method": "bdev_nvme_attach_controller", 00:21:05.434 "req_id": 1 00:21:05.434 } 00:21:05.434 Got JSON-RPC error response 00:21:05.434 response: 00:21:05.434 { 00:21:05.434 "code": -114, 00:21:05.434 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:05.434 } 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.434 request: 00:21:05.434 { 00:21:05.434 "name": "NVMe0", 00:21:05.434 "trtype": "tcp", 00:21:05.434 "traddr": "10.0.0.2", 00:21:05.434 "adrfam": "ipv4", 00:21:05.434 "trsvcid": "4420", 00:21:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.434 "hostaddr": "10.0.0.2", 00:21:05.434 "hostsvcid": "60000", 00:21:05.434 "prchk_reftag": false, 00:21:05.434 "prchk_guard": false, 
00:21:05.434 "hdgst": false, 00:21:05.434 "ddgst": false, 00:21:05.434 "method": "bdev_nvme_attach_controller", 00:21:05.434 "req_id": 1 00:21:05.434 } 00:21:05.434 Got JSON-RPC error response 00:21:05.434 response: 00:21:05.434 { 00:21:05.434 "code": -114, 00:21:05.434 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:05.434 } 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.434 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.434 request: 00:21:05.434 { 00:21:05.434 "name": "NVMe0", 00:21:05.434 "trtype": "tcp", 00:21:05.434 "traddr": "10.0.0.2", 00:21:05.434 "adrfam": "ipv4", 00:21:05.434 "trsvcid": "4420", 00:21:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.434 "hostaddr": "10.0.0.2", 00:21:05.434 "hostsvcid": "60000", 00:21:05.434 "prchk_reftag": false, 00:21:05.434 "prchk_guard": false, 00:21:05.434 "hdgst": false, 00:21:05.434 "ddgst": false, 00:21:05.434 "multipath": "disable", 00:21:05.434 "method": "bdev_nvme_attach_controller", 00:21:05.434 "req_id": 1 00:21:05.435 } 00:21:05.435 Got JSON-RPC error response 00:21:05.435 response: 00:21:05.435 { 00:21:05.435 "code": -114, 00:21:05.435 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:05.435 } 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.435 11:47:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.435 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.692 request: 00:21:05.692 { 00:21:05.692 "name": "NVMe0", 00:21:05.692 "trtype": "tcp", 00:21:05.692 "traddr": "10.0.0.2", 00:21:05.692 "adrfam": "ipv4", 00:21:05.692 "trsvcid": "4420", 00:21:05.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.692 "hostaddr": "10.0.0.2", 00:21:05.692 "hostsvcid": "60000", 00:21:05.692 "prchk_reftag": false, 00:21:05.692 "prchk_guard": false, 00:21:05.692 "hdgst": false, 00:21:05.692 "ddgst": false, 00:21:05.692 "multipath": "failover", 00:21:05.692 "method": "bdev_nvme_attach_controller", 00:21:05.692 "req_id": 1 00:21:05.692 } 00:21:05.692 Got JSON-RPC error response 00:21:05.692 response: 00:21:05.692 { 00:21:05.692 "code": -114, 00:21:05.692 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:05.692 } 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.692 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.692 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.951 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:05.951 11:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.887 0 00:21:06.887 11:47:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:06.887 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.887 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3073793 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3073793 ']' 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3073793 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3073793 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:07.145 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3073793' 00:21:07.145 killing process with pid 3073793 00:21:07.146 11:47:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3073793 00:21:07.146 11:47:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3073793 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:07.404 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:07.404 [2024-07-15 11:47:12.860973] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:07.404 [2024-07-15 11:47:12.861079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073793 ] 00:21:07.404 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.404 [2024-07-15 11:47:12.920402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.404 [2024-07-15 11:47:13.028603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.404 [2024-07-15 11:47:13.726515] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 0a519b45-daf4-465e-a3bf-2a34df9bfc9f already exists 00:21:07.404 [2024-07-15 11:47:13.726557] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:0a519b45-daf4-465e-a3bf-2a34df9bfc9f alias for bdev NVMe1n1 00:21:07.404 [2024-07-15 11:47:13.726571] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:07.404 Running I/O for 1 seconds... 
00:21:07.404 00:21:07.404 Latency(us) 00:21:07.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.404 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:07.404 NVMe0n1 : 1.00 19303.45 75.40 0.00 0.00 6620.38 1953.94 11699.39 00:21:07.404 =================================================================================================================== 00:21:07.404 Total : 19303.45 75.40 0.00 0.00 6620.38 1953.94 11699.39 00:21:07.404 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.404 00:21:07.404 Latency(us) 00:21:07.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.404 =================================================================================================================== 00:21:07.404 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.404 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.404 rmmod nvme_tcp 00:21:07.404 rmmod nvme_fabrics 00:21:07.404 rmmod nvme_keyring 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3073639 ']' 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3073639 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3073639 ']' 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3073639 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3073639 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3073639' 00:21:07.404 killing process with pid 3073639 00:21:07.404 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3073639 00:21:07.404 11:47:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3073639 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.662 11:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.197 11:47:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.197 00:21:10.197 real 0m8.354s 00:21:10.197 user 0m14.395s 00:21:10.197 sys 0m2.424s 00:21:10.197 11:47:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.197 11:47:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:10.197 ************************************ 00:21:10.197 END TEST nvmf_multicontroller 00:21:10.197 ************************************ 00:21:10.197 11:47:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:10.197 11:47:17 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:10.197 11:47:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.197 11:47:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.197 11:47:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.197 ************************************ 00:21:10.197 START TEST nvmf_aer 00:21:10.197 ************************************ 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:10.197 * Looking for test storage... 
00:21:10.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.197 11:47:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:12.098 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:21:12.098 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:12.098 Found net devices under 0000:84:00.0: cvl_0_0 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:12.098 Found net devices under 0000:84:00.1: cvl_0_1 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.098 
11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:21:12.098 00:21:12.098 --- 10.0.0.2 ping statistics --- 00:21:12.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.098 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:21:12.098 00:21:12.098 --- 10.0.0.1 ping statistics --- 00:21:12.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.098 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:12.098 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3076021 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3076021 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3076021 ']' 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.099 11:47:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.099 [2024-07-15 11:47:19.965532] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:12.099 [2024-07-15 11:47:19.965614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.099 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.099 [2024-07-15 11:47:20.033270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.357 [2024-07-15 11:47:20.148950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.357 [2024-07-15 11:47:20.149007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:12.357 [2024-07-15 11:47:20.149036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.357 [2024-07-15 11:47:20.149057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.357 [2024-07-15 11:47:20.149067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.357 [2024-07-15 11:47:20.149187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.357 [2024-07-15 11:47:20.149249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.357 [2024-07-15 11:47:20.149302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.357 [2024-07-15 11:47:20.149305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.357 [2024-07-15 11:47:20.312733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.357 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.616 Malloc0 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.616 [2024-07-15 11:47:20.365009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.616 [ 00:21:12.616 { 00:21:12.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.616 "subtype": "Discovery", 00:21:12.616 "listen_addresses": [], 00:21:12.616 "allow_any_host": true, 00:21:12.616 "hosts": [] 00:21:12.616 }, 00:21:12.616 { 00:21:12.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.616 "subtype": "NVMe", 00:21:12.616 "listen_addresses": [ 00:21:12.616 { 00:21:12.616 "trtype": "TCP", 00:21:12.616 "adrfam": "IPv4", 00:21:12.616 "traddr": "10.0.0.2", 00:21:12.616 "trsvcid": "4420" 00:21:12.616 } 00:21:12.616 ], 00:21:12.616 "allow_any_host": true, 00:21:12.616 "hosts": [], 00:21:12.616 "serial_number": "SPDK00000000000001", 00:21:12.616 "model_number": "SPDK bdev Controller", 00:21:12.616 "max_namespaces": 2, 00:21:12.616 "min_cntlid": 1, 00:21:12.616 "max_cntlid": 65519, 00:21:12.616 "namespaces": [ 00:21:12.616 { 00:21:12.616 "nsid": 1, 00:21:12.616 "bdev_name": "Malloc0", 00:21:12.616 "name": "Malloc0", 00:21:12.616 "nguid": "0F20175A2F5D414E85E492B035C8C8B0", 00:21:12.616 "uuid": "0f20175a-2f5d-414e-85e4-92b035c8c8b0" 00:21:12.616 } 00:21:12.616 ] 00:21:12.616 } 00:21:12.616 ] 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3076168 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:12.616 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.616 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 Malloc1 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 [ 00:21:12.875 { 00:21:12.875 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.875 "subtype": "Discovery", 00:21:12.875 "listen_addresses": [], 00:21:12.875 "allow_any_host": true, 00:21:12.875 "hosts": [] 00:21:12.875 }, 00:21:12.875 { 00:21:12.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.875 "subtype": "NVMe", 00:21:12.875 "listen_addresses": [ 00:21:12.875 { 00:21:12.875 "trtype": "TCP", 00:21:12.875 "adrfam": "IPv4", 00:21:12.875 "traddr": "10.0.0.2", 00:21:12.875 "trsvcid": "4420" 00:21:12.875 } 00:21:12.875 ], 00:21:12.875 "allow_any_host": true, 00:21:12.875 "hosts": [], 00:21:12.875 "serial_number": "SPDK00000000000001", 00:21:12.875 "model_number": "SPDK bdev Controller", 00:21:12.875 "max_namespaces": 2, 00:21:12.875 "min_cntlid": 1, 00:21:12.875 "max_cntlid": 65519, 00:21:12.875 "namespaces": [ 00:21:12.875 { 00:21:12.875 "nsid": 1, 00:21:12.875 "bdev_name": "Malloc0", 00:21:12.875 "name": "Malloc0", 00:21:12.875 "nguid": "0F20175A2F5D414E85E492B035C8C8B0", 00:21:12.875 "uuid": "0f20175a-2f5d-414e-85e4-92b035c8c8b0" 00:21:12.875 }, 00:21:12.875 { 00:21:12.875 "nsid": 2, 00:21:12.875 "bdev_name": "Malloc1", 00:21:12.875 "name": "Malloc1", 00:21:12.875 "nguid": "322B9DBE838D4F2E956E8272174ECEC0", 00:21:12.875 "uuid": "322b9dbe-838d-4f2e-956e-8272174ecec0" 00:21:12.875 } 00:21:12.875 ] 00:21:12.875 } 00:21:12.875 ] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3076168 00:21:12.875 Asynchronous Event Request test 00:21:12.875 Attaching to 10.0.0.2 00:21:12.875 Attached to 10.0.0.2 00:21:12.875 Registering asynchronous event callbacks... 00:21:12.875 Starting namespace attribute notice tests for all controllers... 00:21:12.875 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:12.875 aer_cb - Changed Namespace 00:21:12.875 Cleaning up... 
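(annotation) The namespace-attribute AER exercised just above can be reproduced outside the harness with the same RPCs host/aer.sh issues; this is a minimal sketch, not the harness itself, and it assumes an nvmf_tgt is already running, reachable through the default /var/tmp/spdk.sock, with the same 10.0.0.2 listener address used in this run:

  # transport, subsystem, listener and first namespace, as in host/aer.sh above
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start the AER listener from the SPDK test tree, then add a second namespace;
  # the "aer_cb - Changed Namespace" line in the log above is the expected outcome
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2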
00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.875 rmmod nvme_tcp 00:21:12.875 rmmod nvme_fabrics 00:21:12.875 rmmod nvme_keyring 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3076021 ']' 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3076021 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3076021 ']' 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3076021 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3076021 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3076021' 00:21:12.875 killing process with pid 3076021 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3076021 00:21:12.875 11:47:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3076021 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.134 11:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.675 11:47:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.675 00:21:15.675 real 0m5.423s 00:21:15.675 user 0m4.124s 00:21:15.675 sys 0m1.963s 00:21:15.675 11:47:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.675 11:47:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:15.675 ************************************ 00:21:15.675 END TEST nvmf_aer 00:21:15.675 ************************************ 00:21:15.675 11:47:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:15.675 11:47:23 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.675 11:47:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:15.675 11:47:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.675 11:47:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.675 ************************************ 00:21:15.675 START TEST nvmf_async_init 00:21:15.675 ************************************ 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.675 * Looking for test storage... 
00:21:15.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f5f0fcd4f3b04a33ba2be702e194a609 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.675 11:47:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.675 11:47:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:17.582 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:17.582 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:17.582 Found net devices under 0000:84:00.0: cvl_0_0 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
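For readers following the trace: nvmf/common.sh builds its NIC list by matching known Intel/Mellanox PCI device IDs (0x1592/0x159b for E810, 0x37d2 for X722, the 0x10xx/0xa2xx range for mlx5) and then walking sysfs from each PCI address to its net interface, which is how the cvl_0_0/cvl_0_1 names above are discovered. A rough stand-alone sketch of that lookup, using lspci in place of the harness's internal pci_bus_cache (the 8086:159b ID and the cvl_* naming are taken from this run):

  # List net interfaces backed by Intel E810 NICs (8086:159b), the same
  # PCI-to-netdev walk that produces "Found net devices under 0000:84:00.x" above.
  for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done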
00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.582 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:17.583 Found net devices under 0000:84:00.1: cvl_0_1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:17.583 00:21:17.583 --- 10.0.0.2 ping statistics --- 00:21:17.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.583 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:17.583 00:21:17.583 --- 10.0.0.1 ping statistics --- 00:21:17.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.583 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3078115 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3078115 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3078115 ']' 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.583 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:17.842 [2024-07-15 11:47:25.587271] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
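The nvmf_tcp_init sequence traced above sets up a simple two-port back-to-back topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1; an iptables rule admits the NVMe/TCP port and a ping in each direction verifies the link. Condensed from the log, the same commands are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator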
00:21:17.842 [2024-07-15 11:47:25.587369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.842 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.842 [2024-07-15 11:47:25.654630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.842 [2024-07-15 11:47:25.769765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.842 [2024-07-15 11:47:25.769833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.842 [2024-07-15 11:47:25.769848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.842 [2024-07-15 11:47:25.769876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.843 [2024-07-15 11:47:25.769887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.843 [2024-07-15 11:47:25.769914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.103 [2024-07-15 11:47:25.901650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.103 null0 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.103 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.103 11:47:25 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f5f0fcd4f3b04a33ba2be702e194a609 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.104 [2024-07-15 11:47:25.941926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.104 11:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.365 nvme0n1 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.365 [ 00:21:18.365 { 00:21:18.365 "name": "nvme0n1", 00:21:18.365 "aliases": [ 00:21:18.365 "f5f0fcd4-f3b0-4a33-ba2b-e702e194a609" 00:21:18.365 ], 00:21:18.365 "product_name": "NVMe disk", 00:21:18.365 "block_size": 512, 00:21:18.365 "num_blocks": 2097152, 00:21:18.365 "uuid": "f5f0fcd4-f3b0-4a33-ba2b-e702e194a609", 00:21:18.365 "assigned_rate_limits": { 00:21:18.365 "rw_ios_per_sec": 0, 00:21:18.365 "rw_mbytes_per_sec": 0, 00:21:18.365 "r_mbytes_per_sec": 0, 00:21:18.365 "w_mbytes_per_sec": 0 00:21:18.365 }, 00:21:18.365 "claimed": false, 00:21:18.365 "zoned": false, 00:21:18.365 "supported_io_types": { 00:21:18.365 "read": true, 00:21:18.365 "write": true, 00:21:18.365 "unmap": false, 00:21:18.365 "flush": true, 00:21:18.365 "reset": true, 00:21:18.365 "nvme_admin": true, 00:21:18.365 "nvme_io": true, 00:21:18.365 "nvme_io_md": false, 00:21:18.365 "write_zeroes": true, 00:21:18.365 "zcopy": false, 00:21:18.365 "get_zone_info": false, 00:21:18.365 "zone_management": false, 00:21:18.365 "zone_append": false, 00:21:18.365 "compare": true, 00:21:18.365 "compare_and_write": true, 00:21:18.365 "abort": true, 00:21:18.365 "seek_hole": false, 00:21:18.365 "seek_data": false, 00:21:18.365 "copy": true, 00:21:18.365 "nvme_iov_md": false 00:21:18.365 }, 00:21:18.365 "memory_domains": [ 00:21:18.365 { 00:21:18.365 "dma_device_id": "system", 00:21:18.365 "dma_device_type": 1 00:21:18.365 } 00:21:18.365 ], 00:21:18.365 "driver_specific": { 00:21:18.365 "nvme": [ 00:21:18.365 { 00:21:18.365 "trid": { 00:21:18.365 "trtype": "TCP", 00:21:18.365 "adrfam": "IPv4", 00:21:18.365 "traddr": "10.0.0.2", 
00:21:18.365 "trsvcid": "4420", 00:21:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.365 }, 00:21:18.365 "ctrlr_data": { 00:21:18.365 "cntlid": 1, 00:21:18.365 "vendor_id": "0x8086", 00:21:18.365 "model_number": "SPDK bdev Controller", 00:21:18.365 "serial_number": "00000000000000000000", 00:21:18.365 "firmware_revision": "24.09", 00:21:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.365 "oacs": { 00:21:18.365 "security": 0, 00:21:18.365 "format": 0, 00:21:18.365 "firmware": 0, 00:21:18.365 "ns_manage": 0 00:21:18.365 }, 00:21:18.365 "multi_ctrlr": true, 00:21:18.365 "ana_reporting": false 00:21:18.365 }, 00:21:18.365 "vs": { 00:21:18.365 "nvme_version": "1.3" 00:21:18.365 }, 00:21:18.365 "ns_data": { 00:21:18.365 "id": 1, 00:21:18.365 "can_share": true 00:21:18.365 } 00:21:18.365 } 00:21:18.365 ], 00:21:18.365 "mp_policy": "active_passive" 00:21:18.365 } 00:21:18.365 } 00:21:18.365 ] 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.365 [2024-07-15 11:47:26.194409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.365 [2024-07-15 11:47:26.194504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f45c0 (9): Bad file descriptor 00:21:18.365 [2024-07-15 11:47:26.338881] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.365 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.365 [ 00:21:18.365 { 00:21:18.365 "name": "nvme0n1", 00:21:18.365 "aliases": [ 00:21:18.365 "f5f0fcd4-f3b0-4a33-ba2b-e702e194a609" 00:21:18.625 ], 00:21:18.625 "product_name": "NVMe disk", 00:21:18.625 "block_size": 512, 00:21:18.625 "num_blocks": 2097152, 00:21:18.625 "uuid": "f5f0fcd4-f3b0-4a33-ba2b-e702e194a609", 00:21:18.625 "assigned_rate_limits": { 00:21:18.625 "rw_ios_per_sec": 0, 00:21:18.625 "rw_mbytes_per_sec": 0, 00:21:18.625 "r_mbytes_per_sec": 0, 00:21:18.625 "w_mbytes_per_sec": 0 00:21:18.625 }, 00:21:18.625 "claimed": false, 00:21:18.625 "zoned": false, 00:21:18.625 "supported_io_types": { 00:21:18.625 "read": true, 00:21:18.625 "write": true, 00:21:18.625 "unmap": false, 00:21:18.625 "flush": true, 00:21:18.625 "reset": true, 00:21:18.625 "nvme_admin": true, 00:21:18.625 "nvme_io": true, 00:21:18.625 "nvme_io_md": false, 00:21:18.625 "write_zeroes": true, 00:21:18.625 "zcopy": false, 00:21:18.625 "get_zone_info": false, 00:21:18.625 "zone_management": false, 00:21:18.625 "zone_append": false, 00:21:18.625 "compare": true, 00:21:18.625 "compare_and_write": true, 00:21:18.625 "abort": true, 00:21:18.625 "seek_hole": false, 00:21:18.625 "seek_data": false, 00:21:18.625 "copy": true, 00:21:18.625 "nvme_iov_md": false 00:21:18.625 }, 00:21:18.625 "memory_domains": [ 00:21:18.625 { 00:21:18.625 "dma_device_id": "system", 00:21:18.625 "dma_device_type": 
1 00:21:18.625 } 00:21:18.625 ], 00:21:18.625 "driver_specific": { 00:21:18.625 "nvme": [ 00:21:18.625 { 00:21:18.625 "trid": { 00:21:18.625 "trtype": "TCP", 00:21:18.625 "adrfam": "IPv4", 00:21:18.625 "traddr": "10.0.0.2", 00:21:18.625 "trsvcid": "4420", 00:21:18.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.625 }, 00:21:18.625 "ctrlr_data": { 00:21:18.625 "cntlid": 2, 00:21:18.625 "vendor_id": "0x8086", 00:21:18.625 "model_number": "SPDK bdev Controller", 00:21:18.625 "serial_number": "00000000000000000000", 00:21:18.625 "firmware_revision": "24.09", 00:21:18.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.625 "oacs": { 00:21:18.625 "security": 0, 00:21:18.625 "format": 0, 00:21:18.625 "firmware": 0, 00:21:18.625 "ns_manage": 0 00:21:18.625 }, 00:21:18.625 "multi_ctrlr": true, 00:21:18.625 "ana_reporting": false 00:21:18.625 }, 00:21:18.625 "vs": { 00:21:18.625 "nvme_version": "1.3" 00:21:18.625 }, 00:21:18.625 "ns_data": { 00:21:18.625 "id": 1, 00:21:18.625 "can_share": true 00:21:18.625 } 00:21:18.625 } 00:21:18.625 ], 00:21:18.625 "mp_policy": "active_passive" 00:21:18.625 } 00:21:18.625 } 00:21:18.625 ] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.fadJuMhF9E 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.fadJuMhF9E 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.625 [2024-07-15 11:47:26.395085] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.625 [2024-07-15 11:47:26.395273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fadJuMhF9E 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.625 [2024-07-15 11:47:26.403103] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fadJuMhF9E 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.625 [2024-07-15 11:47:26.411141] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.625 [2024-07-15 11:47:26.411212] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.625 nvme0n1 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.625 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.625 [ 00:21:18.625 { 00:21:18.625 "name": "nvme0n1", 00:21:18.625 "aliases": [ 00:21:18.625 "f5f0fcd4-f3b0-4a33-ba2b-e702e194a609" 00:21:18.625 ], 00:21:18.625 "product_name": "NVMe disk", 00:21:18.625 "block_size": 512, 00:21:18.625 "num_blocks": 2097152, 00:21:18.625 "uuid": "f5f0fcd4-f3b0-4a33-ba2b-e702e194a609", 00:21:18.625 "assigned_rate_limits": { 00:21:18.625 "rw_ios_per_sec": 0, 00:21:18.625 "rw_mbytes_per_sec": 0, 00:21:18.625 "r_mbytes_per_sec": 0, 00:21:18.625 "w_mbytes_per_sec": 0 00:21:18.625 }, 00:21:18.625 "claimed": false, 00:21:18.625 "zoned": false, 00:21:18.625 "supported_io_types": { 00:21:18.625 "read": true, 00:21:18.625 "write": true, 00:21:18.625 "unmap": false, 00:21:18.625 "flush": true, 00:21:18.625 "reset": true, 00:21:18.625 "nvme_admin": true, 00:21:18.625 "nvme_io": true, 00:21:18.625 "nvme_io_md": false, 00:21:18.625 "write_zeroes": true, 00:21:18.625 "zcopy": false, 00:21:18.625 "get_zone_info": false, 00:21:18.625 "zone_management": false, 00:21:18.625 "zone_append": false, 00:21:18.625 "compare": true, 00:21:18.625 "compare_and_write": true, 00:21:18.625 "abort": true, 00:21:18.625 "seek_hole": false, 00:21:18.625 "seek_data": false, 00:21:18.625 "copy": true, 00:21:18.625 "nvme_iov_md": false 00:21:18.625 }, 00:21:18.625 "memory_domains": [ 00:21:18.625 { 00:21:18.625 "dma_device_id": "system", 00:21:18.625 "dma_device_type": 1 00:21:18.625 } 00:21:18.625 ], 00:21:18.625 "driver_specific": { 00:21:18.625 "nvme": [ 00:21:18.625 { 00:21:18.625 "trid": { 00:21:18.625 "trtype": "TCP", 00:21:18.625 "adrfam": "IPv4", 00:21:18.625 "traddr": "10.0.0.2", 00:21:18.625 "trsvcid": "4421", 00:21:18.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.625 }, 00:21:18.625 "ctrlr_data": { 00:21:18.625 "cntlid": 3, 00:21:18.625 "vendor_id": "0x8086", 00:21:18.625 "model_number": "SPDK bdev Controller", 00:21:18.625 "serial_number": "00000000000000000000", 00:21:18.625 "firmware_revision": "24.09", 00:21:18.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:21:18.625 "oacs": { 00:21:18.626 "security": 0, 00:21:18.626 "format": 0, 00:21:18.626 "firmware": 0, 00:21:18.626 "ns_manage": 0 00:21:18.626 }, 00:21:18.626 "multi_ctrlr": true, 00:21:18.626 "ana_reporting": false 00:21:18.626 }, 00:21:18.626 "vs": { 00:21:18.626 "nvme_version": "1.3" 00:21:18.626 }, 00:21:18.626 "ns_data": { 00:21:18.626 "id": 1, 00:21:18.626 "can_share": true 00:21:18.626 } 00:21:18.626 } 00:21:18.626 ], 00:21:18.626 "mp_policy": "active_passive" 00:21:18.626 } 00:21:18.626 } 00:21:18.626 ] 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.fadJuMhF9E 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.626 rmmod nvme_tcp 00:21:18.626 rmmod nvme_fabrics 00:21:18.626 rmmod nvme_keyring 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3078115 ']' 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3078115 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3078115 ']' 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3078115 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3078115 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3078115' 00:21:18.626 killing process with pid 3078115 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3078115 00:21:18.626 [2024-07-15 11:47:26.592988] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:21:18.626 [2024-07-15 11:47:26.593047] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.626 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3078115 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.885 11:47:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.424 11:47:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.424 00:21:21.424 real 0m5.730s 00:21:21.424 user 0m2.157s 00:21:21.424 sys 0m1.968s 00:21:21.424 11:47:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.424 11:47:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.424 ************************************ 00:21:21.424 END TEST nvmf_async_init 00:21:21.424 ************************************ 00:21:21.424 11:47:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:21.424 11:47:28 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:21.424 11:47:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:21.424 11:47:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.424 11:47:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.424 ************************************ 00:21:21.424 START TEST dma 00:21:21.424 ************************************ 00:21:21.424 11:47:28 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:21.424 * Looking for test storage... 
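Stepping back for a moment before the dma and identify suites: the nvmf_async_init run that just finished drives the target through a short RPC sequence. The sketch below restates it with the standard scripts/rpc.py client rather than the harness's rpc_cmd wrapper (a hedged reconstruction, not the test script itself; the namespace GUID is generated the same way the test does it, and /tmp/psk.key stands in for the mktemp file that held the sample PSK in this run):

  # Target side (run inside the cvl_0_0_ns_spdk namespace where nvmf_tgt listens)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_null_create null0 1024 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$(uuidgen | tr -d -)"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: attach, inspect, reset and detach the controller over plain TCP
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_get_bdevs -b nvme0n1
  scripts/rpc.py bdev_nvme_reset_controller nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

  # TLS variant: restrict the subsystem, add a --secure-channel listener on 4421,
  # register the host with a PSK file, then attach again using the same key
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -q nqn.2016-06.io.spdk:host1 -n nqn.2016-06.io.spdk:cnode0 --psk /tmp/psk.key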
00:21:21.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.424 11:47:28 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.424 11:47:28 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.424 11:47:29 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.424 11:47:29 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.424 11:47:29 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.424 11:47:29 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:21.424 11:47:29 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.424 11:47:29 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.424 11:47:29 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:21.424 11:47:29 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:21.424 00:21:21.424 real 0m0.073s 00:21:21.424 user 0m0.034s 00:21:21.424 sys 0m0.045s 00:21:21.424 11:47:29 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.424 11:47:29 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:21.424 ************************************ 00:21:21.424 END TEST dma 00:21:21.424 ************************************ 00:21:21.424 11:47:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:21.424 11:47:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.424 11:47:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:21.424 11:47:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.424 11:47:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.424 ************************************ 00:21:21.424 START TEST nvmf_identify 00:21:21.424 ************************************ 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.424 * Looking for test storage... 
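As the trace shows, the dma suite is effectively a no-op for this configuration: host/dma.sh checks the transport at lines @12/@13 and exits immediately because this run uses tcp rather than rdma. The guard amounts to the following (the variable name is not visible in the xtrace, so $TEST_TRANSPORT is a placeholder for whatever the script actually compares):

  # host/dma.sh bails out early on non-RDMA transports; the DMA path is only exercised over RDMA.
  if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0
  fi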
00:21:21.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.424 11:47:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.425 11:47:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:23.329 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:23.329 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.329 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:23.330 Found net devices under 0000:84:00.0: cvl_0_0 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:23.330 Found net devices under 0000:84:00.1: cvl_0_1 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:21:23.330 00:21:23.330 --- 10.0.0.2 ping statistics --- 00:21:23.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.330 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:23.330 00:21:23.330 --- 10.0.0.1 ping statistics --- 00:21:23.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.330 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.330 11:47:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3080260 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3080260 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3080260 ']' 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.589 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.589 [2024-07-15 11:47:31.375514] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:23.589 [2024-07-15 11:47:31.375585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.589 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.589 [2024-07-15 11:47:31.438904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.589 [2024-07-15 11:47:31.550847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
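The shell trace above covers the network plumbing nvmf/common.sh performs before the target starts: both E810 ports (0000:84:00.0/.1, device 0x159b bound to ice) are resolved to their net devices via /sys/bus/pci/devices/<bdf>/net/, then cvl_0_0 is moved into a fresh network namespace to act as the target side while cvl_0_1 stays in the root namespace as the initiator, and reachability is verified with ping in both directions. A minimal consolidated sketch of that setup, using the interface and namespace names from this run (they are host specific), might look like:

    TARGET_IF=cvl_0_0            # moved into the namespace, carries the target address 10.0.0.2
    INITIATOR_IF=cvl_0_1         # stays in the root namespace, carries the initiator address 10.0.0.1
    NS=cvl_0_0_ns_spdk

    # Start from a clean slate on both ports.
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    # Give the target port its own namespace so both ends can live on one host.
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    # Addresses on the shared 10.0.0.0/24 test subnet.
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    # Bring everything up, including loopback inside the namespace.
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Accept TCP port 4420 (NVMe/TCP) on the initiator-side interface, matching the rule in the log.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Reachability checks in both directions, as in the ping output above.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1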
00:21:23.589 [2024-07-15 11:47:31.550903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.589 [2024-07-15 11:47:31.550933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.589 [2024-07-15 11:47:31.550944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.589 [2024-07-15 11:47:31.550954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.589 [2024-07-15 11:47:31.551041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.589 [2024-07-15 11:47:31.551087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.589 [2024-07-15 11:47:31.551333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.589 [2024-07-15 11:47:31.551337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 [2024-07-15 11:47:31.682357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 Malloc0 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 [2024-07-15 11:47:31.759550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.848 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.848 [ 00:21:23.848 { 00:21:23.848 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:23.848 "subtype": "Discovery", 00:21:23.848 "listen_addresses": [ 00:21:23.848 { 00:21:23.848 "trtype": "TCP", 00:21:23.848 "adrfam": "IPv4", 00:21:23.848 "traddr": "10.0.0.2", 00:21:23.848 "trsvcid": "4420" 00:21:23.848 } 00:21:23.848 ], 00:21:23.848 "allow_any_host": true, 00:21:23.848 "hosts": [] 00:21:23.848 }, 00:21:23.848 { 00:21:23.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.848 "subtype": "NVMe", 00:21:23.848 "listen_addresses": [ 00:21:23.848 { 00:21:23.848 "trtype": "TCP", 00:21:23.848 "adrfam": "IPv4", 00:21:23.848 "traddr": "10.0.0.2", 00:21:23.848 "trsvcid": "4420" 00:21:23.848 } 00:21:23.848 ], 00:21:23.848 "allow_any_host": true, 00:21:23.848 "hosts": [], 00:21:23.848 "serial_number": "SPDK00000000000001", 00:21:23.848 "model_number": "SPDK bdev Controller", 00:21:23.848 "max_namespaces": 32, 00:21:23.848 "min_cntlid": 1, 00:21:23.848 "max_cntlid": 65519, 00:21:23.848 "namespaces": [ 00:21:23.848 { 00:21:23.848 "nsid": 1, 00:21:23.848 "bdev_name": "Malloc0", 00:21:23.848 "name": "Malloc0", 00:21:23.849 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:23.849 "eui64": "ABCDEF0123456789", 00:21:23.849 "uuid": "cb726df3-13cd-4c75-b843-396da3b1fa97" 00:21:23.849 } 00:21:23.849 ] 00:21:23.849 } 00:21:23.849 ] 00:21:23.849 11:47:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.849 11:47:31 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:23.849 [2024-07-15 11:47:31.801897] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
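With networking in place, host/identify.sh starts nvmf_tgt inside the target namespace and configures it over the local RPC socket: a TCP transport, a 64 MB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1 (the NGUID/EUI64 values reappear in the identify data later), and TCP listeners on 10.0.0.2:4420 for both cnode1 and the discovery subsystem; nvmf_get_subsystems then prints the JSON shown above, after which spdk_nvme_identify is run against the discovery NQN. The rpc_cmd helper used in the trace is a thin wrapper around SPDK's scripts/rpc.py, so roughly equivalent standalone calls (paths and namespace name taken from this run) would be:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    # Target runs inside the namespace that owns cvl_0_0: instance 0, tracepoint mask 0xFFFF, core mask 0xF.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as used above
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM bdev, 512-byte blocks
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_get_subsystems                            # dumps the JSON listing shown above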
00:21:23.849 [2024-07-15 11:47:31.801940] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080404 ] 00:21:23.849 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.849 [2024-07-15 11:47:31.833666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:23.849 [2024-07-15 11:47:31.833766] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:23.849 [2024-07-15 11:47:31.833779] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:23.849 [2024-07-15 11:47:31.833807] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:23.849 [2024-07-15 11:47:31.833817] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.110 [2024-07-15 11:47:31.837817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:24.110 [2024-07-15 11:47:31.837874] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x101b540 0 00:21:24.110 [2024-07-15 11:47:31.844752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.110 [2024-07-15 11:47:31.844800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.110 [2024-07-15 11:47:31.844810] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.110 [2024-07-15 11:47:31.844816] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.110 [2024-07-15 11:47:31.844871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.844885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.844892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.110 [2024-07-15 11:47:31.844911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.110 [2024-07-15 11:47:31.844937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.110 [2024-07-15 11:47:31.852751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.110 [2024-07-15 11:47:31.852769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.110 [2024-07-15 11:47:31.852800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.852808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.110 [2024-07-15 11:47:31.852831] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.110 [2024-07-15 11:47:31.852843] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:24.110 [2024-07-15 11:47:31.852854] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:24.110 [2024-07-15 11:47:31.852877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.852885] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.852892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.110 [2024-07-15 11:47:31.852907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.110 [2024-07-15 11:47:31.852932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.110 [2024-07-15 11:47:31.853068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.110 [2024-07-15 11:47:31.853083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.110 [2024-07-15 11:47:31.853089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.110 [2024-07-15 11:47:31.853121] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:24.110 [2024-07-15 11:47:31.853134] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:24.110 [2024-07-15 11:47:31.853146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.110 [2024-07-15 11:47:31.853169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.110 [2024-07-15 11:47:31.853189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.110 [2024-07-15 11:47:31.853276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.110 [2024-07-15 11:47:31.853290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.110 [2024-07-15 11:47:31.853296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.110 [2024-07-15 11:47:31.853311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:24.110 [2024-07-15 11:47:31.853324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.110 [2024-07-15 11:47:31.853335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.110 [2024-07-15 11:47:31.853358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.110 [2024-07-15 11:47:31.853377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.110 [2024-07-15 11:47:31.853457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.110 
[2024-07-15 11:47:31.853471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.110 [2024-07-15 11:47:31.853477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.110 [2024-07-15 11:47:31.853491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.110 [2024-07-15 11:47:31.853507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.110 [2024-07-15 11:47:31.853531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.110 [2024-07-15 11:47:31.853551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.110 [2024-07-15 11:47:31.853635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.110 [2024-07-15 11:47:31.853648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.110 [2024-07-15 11:47:31.853655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.110 [2024-07-15 11:47:31.853661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.110 [2024-07-15 11:47:31.853669] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:24.110 [2024-07-15 11:47:31.853677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:24.111 [2024-07-15 11:47:31.853689] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.111 [2024-07-15 11:47:31.853799] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:24.111 [2024-07-15 11:47:31.853810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.111 [2024-07-15 11:47:31.853824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.853832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.853838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.853848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.111 [2024-07-15 11:47:31.853869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.111 [2024-07-15 11:47:31.853976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.111 [2024-07-15 11:47:31.853987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.111 [2024-07-15 11:47:31.853994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.111 [2024-07-15 11:47:31.854008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.111 [2024-07-15 11:47:31.854029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.854067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.111 [2024-07-15 11:47:31.854095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.111 [2024-07-15 11:47:31.854181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.111 [2024-07-15 11:47:31.854195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.111 [2024-07-15 11:47:31.854201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.111 [2024-07-15 11:47:31.854215] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.111 [2024-07-15 11:47:31.854223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:24.111 [2024-07-15 11:47:31.854236] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:24.111 [2024-07-15 11:47:31.854249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.111 [2024-07-15 11:47:31.854268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.854286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.111 [2024-07-15 11:47:31.854306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.111 [2024-07-15 11:47:31.854436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.111 [2024-07-15 11:47:31.854450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.111 [2024-07-15 11:47:31.854456] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854463] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x101b540): datao=0, datal=4096, cccid=0 00:21:24.111 [2024-07-15 11:47:31.854470] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107b3c0) on tqpair(0x101b540): expected_datao=0, payload_size=4096 00:21:24.111 [2024-07-15 11:47:31.854477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854488] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.854507] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.895888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.111 [2024-07-15 11:47:31.895907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.111 [2024-07-15 11:47:31.895914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.895921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.111 [2024-07-15 11:47:31.895933] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:24.111 [2024-07-15 11:47:31.895947] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:24.111 [2024-07-15 11:47:31.895955] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:24.111 [2024-07-15 11:47:31.895964] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:24.111 [2024-07-15 11:47:31.895972] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:24.111 [2024-07-15 11:47:31.895980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:24.111 [2024-07-15 11:47:31.895995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.111 [2024-07-15 11:47:31.896007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.111 [2024-07-15 11:47:31.896068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.111 [2024-07-15 11:47:31.896162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.111 [2024-07-15 11:47:31.896173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.111 [2024-07-15 11:47:31.896180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.111 [2024-07-15 11:47:31.896198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.111 [2024-07-15 11:47:31.896234] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.111 [2024-07-15 11:47:31.896263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.111 [2024-07-15 11:47:31.896293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.111 [2024-07-15 11:47:31.896321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.111 [2024-07-15 11:47:31.896340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.111 [2024-07-15 11:47:31.896351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.111 [2024-07-15 11:47:31.896389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b3c0, cid 0, qid 0 00:21:24.111 [2024-07-15 11:47:31.896399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b540, cid 1, qid 0 00:21:24.111 [2024-07-15 11:47:31.896407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b6c0, cid 2, qid 0 00:21:24.111 [2024-07-15 11:47:31.896414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.111 [2024-07-15 11:47:31.896421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b9c0, cid 4, qid 0 00:21:24.111 [2024-07-15 11:47:31.896573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.111 [2024-07-15 11:47:31.896587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.111 [2024-07-15 11:47:31.896593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b9c0) on tqpair=0x101b540 00:21:24.111 [2024-07-15 11:47:31.896612] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:24.111 [2024-07-15 11:47:31.896621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:24.111 [2024-07-15 11:47:31.896638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.896646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x101b540) 00:21:24.111 [2024-07-15 11:47:31.896660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.111 [2024-07-15 11:47:31.896680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b9c0, cid 4, qid 0 00:21:24.111 [2024-07-15 11:47:31.900750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.111 [2024-07-15 11:47:31.900766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.111 [2024-07-15 11:47:31.900772] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.900779] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x101b540): datao=0, datal=4096, cccid=4 00:21:24.111 [2024-07-15 11:47:31.900786] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107b9c0) on tqpair(0x101b540): expected_datao=0, payload_size=4096 00:21:24.111 [2024-07-15 11:47:31.900793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.900803] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.900810] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.111 [2024-07-15 11:47:31.900818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.111 [2024-07-15 11:47:31.900826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.111 [2024-07-15 11:47:31.900833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.900839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b9c0) on tqpair=0x101b540 00:21:24.112 [2024-07-15 11:47:31.900859] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:24.112 [2024-07-15 11:47:31.900901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.900912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x101b540) 00:21:24.112 [2024-07-15 11:47:31.900922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.112 [2024-07-15 11:47:31.900934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.900941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.900947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x101b540) 00:21:24.112 [2024-07-15 11:47:31.900955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.112 [2024-07-15 11:47:31.900982] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x107b9c0, cid 4, qid 0 00:21:24.112 [2024-07-15 11:47:31.900994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107bb40, cid 5, qid 0 00:21:24.112 [2024-07-15 11:47:31.901204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.112 [2024-07-15 11:47:31.901216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.112 [2024-07-15 11:47:31.901222] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.901228] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x101b540): datao=0, datal=1024, cccid=4 00:21:24.112 [2024-07-15 11:47:31.901235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107b9c0) on tqpair(0x101b540): expected_datao=0, payload_size=1024 00:21:24.112 [2024-07-15 11:47:31.901242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.901250] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.901257] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.901265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.112 [2024-07-15 11:47:31.901273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.112 [2024-07-15 11:47:31.901279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.901285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107bb40) on tqpair=0x101b540 00:21:24.112 [2024-07-15 11:47:31.943748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.112 [2024-07-15 11:47:31.943766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.112 [2024-07-15 11:47:31.943773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.943779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b9c0) on tqpair=0x101b540 00:21:24.112 [2024-07-15 11:47:31.943799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.943808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x101b540) 00:21:24.112 [2024-07-15 11:47:31.943818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.112 [2024-07-15 11:47:31.943849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b9c0, cid 4, qid 0 00:21:24.112 [2024-07-15 11:47:31.944052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.112 [2024-07-15 11:47:31.944066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.112 [2024-07-15 11:47:31.944073] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.944079] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x101b540): datao=0, datal=3072, cccid=4 00:21:24.112 [2024-07-15 11:47:31.944086] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107b9c0) on tqpair(0x101b540): expected_datao=0, payload_size=3072 00:21:24.112 [2024-07-15 11:47:31.944093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.944113] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.944121] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.986757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.112 [2024-07-15 11:47:31.986775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.112 [2024-07-15 11:47:31.986782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.986789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b9c0) on tqpair=0x101b540 00:21:24.112 [2024-07-15 11:47:31.986805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.986813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x101b540) 00:21:24.112 [2024-07-15 11:47:31.986824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.112 [2024-07-15 11:47:31.986853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b9c0, cid 4, qid 0 00:21:24.112 [2024-07-15 11:47:31.986954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.112 [2024-07-15 11:47:31.986968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.112 [2024-07-15 11:47:31.986975] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.986981] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x101b540): datao=0, datal=8, cccid=4 00:21:24.112 [2024-07-15 11:47:31.986988] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x107b9c0) on tqpair(0x101b540): expected_datao=0, payload_size=8 00:21:24.112 [2024-07-15 11:47:31.986995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.987005] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:31.987011] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:32.031753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.112 [2024-07-15 11:47:32.031770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.112 [2024-07-15 11:47:32.031777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.112 [2024-07-15 11:47:32.031784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b9c0) on tqpair=0x101b540 00:21:24.112 ===================================================== 00:21:24.112 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:24.112 ===================================================== 00:21:24.112 Controller Capabilities/Features 00:21:24.112 ================================ 00:21:24.112 Vendor ID: 0000 00:21:24.112 Subsystem Vendor ID: 0000 00:21:24.112 Serial Number: .................... 00:21:24.112 Model Number: ........................................ 
00:21:24.112 Firmware Version: 24.09 00:21:24.112 Recommended Arb Burst: 0 00:21:24.112 IEEE OUI Identifier: 00 00 00 00:21:24.112 Multi-path I/O 00:21:24.112 May have multiple subsystem ports: No 00:21:24.112 May have multiple controllers: No 00:21:24.112 Associated with SR-IOV VF: No 00:21:24.112 Max Data Transfer Size: 131072 00:21:24.112 Max Number of Namespaces: 0 00:21:24.112 Max Number of I/O Queues: 1024 00:21:24.112 NVMe Specification Version (VS): 1.3 00:21:24.112 NVMe Specification Version (Identify): 1.3 00:21:24.112 Maximum Queue Entries: 128 00:21:24.112 Contiguous Queues Required: Yes 00:21:24.112 Arbitration Mechanisms Supported 00:21:24.112 Weighted Round Robin: Not Supported 00:21:24.112 Vendor Specific: Not Supported 00:21:24.112 Reset Timeout: 15000 ms 00:21:24.112 Doorbell Stride: 4 bytes 00:21:24.112 NVM Subsystem Reset: Not Supported 00:21:24.112 Command Sets Supported 00:21:24.112 NVM Command Set: Supported 00:21:24.112 Boot Partition: Not Supported 00:21:24.112 Memory Page Size Minimum: 4096 bytes 00:21:24.112 Memory Page Size Maximum: 4096 bytes 00:21:24.112 Persistent Memory Region: Not Supported 00:21:24.112 Optional Asynchronous Events Supported 00:21:24.112 Namespace Attribute Notices: Not Supported 00:21:24.112 Firmware Activation Notices: Not Supported 00:21:24.112 ANA Change Notices: Not Supported 00:21:24.112 PLE Aggregate Log Change Notices: Not Supported 00:21:24.112 LBA Status Info Alert Notices: Not Supported 00:21:24.112 EGE Aggregate Log Change Notices: Not Supported 00:21:24.112 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.112 Zone Descriptor Change Notices: Not Supported 00:21:24.112 Discovery Log Change Notices: Supported 00:21:24.112 Controller Attributes 00:21:24.112 128-bit Host Identifier: Not Supported 00:21:24.112 Non-Operational Permissive Mode: Not Supported 00:21:24.112 NVM Sets: Not Supported 00:21:24.112 Read Recovery Levels: Not Supported 00:21:24.112 Endurance Groups: Not Supported 00:21:24.112 Predictable Latency Mode: Not Supported 00:21:24.112 Traffic Based Keep ALive: Not Supported 00:21:24.112 Namespace Granularity: Not Supported 00:21:24.112 SQ Associations: Not Supported 00:21:24.112 UUID List: Not Supported 00:21:24.112 Multi-Domain Subsystem: Not Supported 00:21:24.112 Fixed Capacity Management: Not Supported 00:21:24.112 Variable Capacity Management: Not Supported 00:21:24.112 Delete Endurance Group: Not Supported 00:21:24.112 Delete NVM Set: Not Supported 00:21:24.112 Extended LBA Formats Supported: Not Supported 00:21:24.112 Flexible Data Placement Supported: Not Supported 00:21:24.112 00:21:24.112 Controller Memory Buffer Support 00:21:24.112 ================================ 00:21:24.112 Supported: No 00:21:24.112 00:21:24.112 Persistent Memory Region Support 00:21:24.112 ================================ 00:21:24.112 Supported: No 00:21:24.112 00:21:24.112 Admin Command Set Attributes 00:21:24.112 ============================ 00:21:24.112 Security Send/Receive: Not Supported 00:21:24.112 Format NVM: Not Supported 00:21:24.112 Firmware Activate/Download: Not Supported 00:21:24.112 Namespace Management: Not Supported 00:21:24.112 Device Self-Test: Not Supported 00:21:24.112 Directives: Not Supported 00:21:24.112 NVMe-MI: Not Supported 00:21:24.112 Virtualization Management: Not Supported 00:21:24.112 Doorbell Buffer Config: Not Supported 00:21:24.112 Get LBA Status Capability: Not Supported 00:21:24.112 Command & Feature Lockdown Capability: Not Supported 00:21:24.112 Abort Command Limit: 1 00:21:24.112 Async 
Event Request Limit: 4 00:21:24.112 Number of Firmware Slots: N/A 00:21:24.112 Firmware Slot 1 Read-Only: N/A 00:21:24.112 Firmware Activation Without Reset: N/A 00:21:24.112 Multiple Update Detection Support: N/A 00:21:24.112 Firmware Update Granularity: No Information Provided 00:21:24.112 Per-Namespace SMART Log: No 00:21:24.112 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.112 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:24.112 Command Effects Log Page: Not Supported 00:21:24.113 Get Log Page Extended Data: Supported 00:21:24.113 Telemetry Log Pages: Not Supported 00:21:24.113 Persistent Event Log Pages: Not Supported 00:21:24.113 Supported Log Pages Log Page: May Support 00:21:24.113 Commands Supported & Effects Log Page: Not Supported 00:21:24.113 Feature Identifiers & Effects Log Page:May Support 00:21:24.113 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.113 Data Area 4 for Telemetry Log: Not Supported 00:21:24.113 Error Log Page Entries Supported: 128 00:21:24.113 Keep Alive: Not Supported 00:21:24.113 00:21:24.113 NVM Command Set Attributes 00:21:24.113 ========================== 00:21:24.113 Submission Queue Entry Size 00:21:24.113 Max: 1 00:21:24.113 Min: 1 00:21:24.113 Completion Queue Entry Size 00:21:24.113 Max: 1 00:21:24.113 Min: 1 00:21:24.113 Number of Namespaces: 0 00:21:24.113 Compare Command: Not Supported 00:21:24.113 Write Uncorrectable Command: Not Supported 00:21:24.113 Dataset Management Command: Not Supported 00:21:24.113 Write Zeroes Command: Not Supported 00:21:24.113 Set Features Save Field: Not Supported 00:21:24.113 Reservations: Not Supported 00:21:24.113 Timestamp: Not Supported 00:21:24.113 Copy: Not Supported 00:21:24.113 Volatile Write Cache: Not Present 00:21:24.113 Atomic Write Unit (Normal): 1 00:21:24.113 Atomic Write Unit (PFail): 1 00:21:24.113 Atomic Compare & Write Unit: 1 00:21:24.113 Fused Compare & Write: Supported 00:21:24.113 Scatter-Gather List 00:21:24.113 SGL Command Set: Supported 00:21:24.113 SGL Keyed: Supported 00:21:24.113 SGL Bit Bucket Descriptor: Not Supported 00:21:24.113 SGL Metadata Pointer: Not Supported 00:21:24.113 Oversized SGL: Not Supported 00:21:24.113 SGL Metadata Address: Not Supported 00:21:24.113 SGL Offset: Supported 00:21:24.113 Transport SGL Data Block: Not Supported 00:21:24.113 Replay Protected Memory Block: Not Supported 00:21:24.113 00:21:24.113 Firmware Slot Information 00:21:24.113 ========================= 00:21:24.113 Active slot: 0 00:21:24.113 00:21:24.113 00:21:24.113 Error Log 00:21:24.113 ========= 00:21:24.113 00:21:24.113 Active Namespaces 00:21:24.113 ================= 00:21:24.113 Discovery Log Page 00:21:24.113 ================== 00:21:24.113 Generation Counter: 2 00:21:24.113 Number of Records: 2 00:21:24.113 Record Format: 0 00:21:24.113 00:21:24.113 Discovery Log Entry 0 00:21:24.113 ---------------------- 00:21:24.113 Transport Type: 3 (TCP) 00:21:24.113 Address Family: 1 (IPv4) 00:21:24.113 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:24.113 Entry Flags: 00:21:24.113 Duplicate Returned Information: 1 00:21:24.113 Explicit Persistent Connection Support for Discovery: 1 00:21:24.113 Transport Requirements: 00:21:24.113 Secure Channel: Not Required 00:21:24.113 Port ID: 0 (0x0000) 00:21:24.113 Controller ID: 65535 (0xffff) 00:21:24.113 Admin Max SQ Size: 128 00:21:24.113 Transport Service Identifier: 4420 00:21:24.113 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:24.113 Transport Address: 10.0.0.2 00:21:24.113 
Discovery Log Entry 1 00:21:24.113 ---------------------- 00:21:24.113 Transport Type: 3 (TCP) 00:21:24.113 Address Family: 1 (IPv4) 00:21:24.113 Subsystem Type: 2 (NVM Subsystem) 00:21:24.113 Entry Flags: 00:21:24.113 Duplicate Returned Information: 0 00:21:24.113 Explicit Persistent Connection Support for Discovery: 0 00:21:24.113 Transport Requirements: 00:21:24.113 Secure Channel: Not Required 00:21:24.113 Port ID: 0 (0x0000) 00:21:24.113 Controller ID: 65535 (0xffff) 00:21:24.113 Admin Max SQ Size: 128 00:21:24.113 Transport Service Identifier: 4420 00:21:24.113 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:24.113 Transport Address: 10.0.0.2 [2024-07-15 11:47:32.031914] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:24.113 [2024-07-15 11:47:32.031945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b3c0) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.031958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.113 [2024-07-15 11:47:32.031966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b540) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.031974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.113 [2024-07-15 11:47:32.031982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b6c0) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.031989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.113 [2024-07-15 11:47:32.031997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.032004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.113 [2024-07-15 11:47:32.032036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.113 [2024-07-15 11:47:32.032062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.113 [2024-07-15 11:47:32.032087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.113 [2024-07-15 11:47:32.032259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.113 [2024-07-15 11:47:32.032271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.113 [2024-07-15 11:47:32.032277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.032294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.113 [2024-07-15 
11:47:32.032317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.113 [2024-07-15 11:47:32.032353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.113 [2024-07-15 11:47:32.032530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.113 [2024-07-15 11:47:32.032543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.113 [2024-07-15 11:47:32.032550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.032564] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:24.113 [2024-07-15 11:47:32.032572] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:24.113 [2024-07-15 11:47:32.032587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.113 [2024-07-15 11:47:32.032611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.113 [2024-07-15 11:47:32.032631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.113 [2024-07-15 11:47:32.032715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.113 [2024-07-15 11:47:32.032752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.113 [2024-07-15 11:47:32.032760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.032786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.113 [2024-07-15 11:47:32.032812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.113 [2024-07-15 11:47:32.032833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.113 [2024-07-15 11:47:32.032928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.113 [2024-07-15 11:47:32.032943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.113 [2024-07-15 11:47:32.032950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.032973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.032988] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.113 [2024-07-15 11:47:32.032998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.113 [2024-07-15 11:47:32.033033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.113 [2024-07-15 11:47:32.033129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.113 [2024-07-15 11:47:32.033143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.113 [2024-07-15 11:47:32.033150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.033156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.113 [2024-07-15 11:47:32.033172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.033180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.113 [2024-07-15 11:47:32.033187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.113 [2024-07-15 11:47:32.033197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.113 [2024-07-15 11:47:32.033216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.113 [2024-07-15 11:47:32.033303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.113 [2024-07-15 11:47:32.033314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.113 [2024-07-15 11:47:32.033320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.033341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.033366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.033385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.033466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.033480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.033487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.033509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.033533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.033552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.033635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.033648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.033654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.033676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.033700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.033735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.033822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.033836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.033843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.033866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.033882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.033892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.033912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.034004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.034018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.034039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.034062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.034086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.034106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 
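The FABRIC PROPERTY SET at the top of this trace writes CC to request a normal shutdown of the discovery controller (RTD3E = 0 us, shutdown timeout = 10000 ms); each FABRIC PROPERTY GET that follows is nvme_ctrlr_shutdown_poll_async re-reading CSTS until SHST reports completion. A minimal, spec-level C sketch of that handshake follows; prop_get()/prop_set() are hypothetical stand-ins for the Fabrics Property Get/Set commands, backed here by a fake register file so the sketch is self-contained, and are not SPDK functions.

/* Spec-level sketch of the shutdown handshake being polled in this trace. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC          0x14u          /* Controller Configuration       */
#define NVME_REG_CSTS        0x1cu          /* Controller Status              */
#define NVME_CC_SHN_NORMAL   (1u << 14)     /* CC.SHN = 01b: normal shutdown  */
#define NVME_CSTS_SHST_MASK  (3u << 2)      /* CSTS.SHST, bits 3:2            */
#define NVME_CSTS_SHST_DONE  (2u << 2)      /* 10b: shutdown processing done  */

static uint32_t fake_regs[0x40];            /* stand-in property space        */

static uint32_t prop_get(uint32_t off) { return fake_regs[off / 4]; }

static void prop_set(uint32_t off, uint32_t val)
{
	fake_regs[off / 4] = val;
	/* Fake target: acknowledge a shutdown request immediately. */
	if (off == NVME_REG_CC && (val & NVME_CC_SHN_NORMAL)) {
		fake_regs[NVME_REG_CSTS / 4] |= NVME_CSTS_SHST_DONE;
	}
}

int main(void)
{
	/* Property Set: request a normal shutdown, as the FABRIC PROPERTY SET
	 * above does before the shutdown poller starts running. */
	prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_SHN_NORMAL);

	/* Property Get loop: re-read CSTS until SHST reports completion; the
	 * real host also enforces the 10000 ms shutdown timeout noted above. */
	while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK) != NVME_CSTS_SHST_DONE) {
		/* keep polling */
	}
	printf("shutdown complete\n");
	return 0;
}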
[2024-07-15 11:47:32.034191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.034205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.034214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.034237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.034261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.034280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.034367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.034379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.034385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.034407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.034430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.034449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.034528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.034539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.034545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.034567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.034591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.034610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.034696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.034707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:24.114 [2024-07-15 11:47:32.034713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.034735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.034782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.034803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.034905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.034919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.034926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.034952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.034968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.114 [2024-07-15 11:47:32.034977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.114 [2024-07-15 11:47:32.034997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.114 [2024-07-15 11:47:32.035099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.114 [2024-07-15 11:47:32.035113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.114 [2024-07-15 11:47:32.035120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.035126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.114 [2024-07-15 11:47:32.035142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.035151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.114 [2024-07-15 11:47:32.035157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.115 [2024-07-15 11:47:32.035167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.115 [2024-07-15 11:47:32.035186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.115 [2024-07-15 11:47:32.035270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.115 [2024-07-15 11:47:32.035281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.115 [2024-07-15 11:47:32.035287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.115 [2024-07-15 11:47:32.035308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.115 [2024-07-15 11:47:32.035332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.115 [2024-07-15 11:47:32.035351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.115 [2024-07-15 11:47:32.035433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.115 [2024-07-15 11:47:32.035447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.115 [2024-07-15 11:47:32.035453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.115 [2024-07-15 11:47:32.035475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.115 [2024-07-15 11:47:32.035499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.115 [2024-07-15 11:47:32.035519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.115 [2024-07-15 11:47:32.035601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.115 [2024-07-15 11:47:32.035614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.115 [2024-07-15 11:47:32.035621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.115 [2024-07-15 11:47:32.035646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.035662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.115 [2024-07-15 11:47:32.035671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.115 [2024-07-15 11:47:32.035690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.115 [2024-07-15 11:47:32.039751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.115 [2024-07-15 11:47:32.039767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.115 [2024-07-15 11:47:32.039774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.039780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.115 [2024-07-15 11:47:32.039798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.039807] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.039813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x101b540) 00:21:24.115 [2024-07-15 11:47:32.039824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.115 [2024-07-15 11:47:32.039845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x107b840, cid 3, qid 0 00:21:24.115 [2024-07-15 11:47:32.039972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.115 [2024-07-15 11:47:32.039987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.115 [2024-07-15 11:47:32.039993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.115 [2024-07-15 11:47:32.040000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x107b840) on tqpair=0x101b540 00:21:24.115 [2024-07-15 11:47:32.040013] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:24.115 00:21:24.115 11:47:32 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:24.115 [2024-07-15 11:47:32.075627] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:24.115 [2024-07-15 11:47:32.075676] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080412 ] 00:21:24.115 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.379 [2024-07-15 11:47:32.110788] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:24.379 [2024-07-15 11:47:32.110842] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.379 [2024-07-15 11:47:32.110852] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.379 [2024-07-15 11:47:32.110867] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.379 [2024-07-15 11:47:32.110877] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.379 [2024-07-15 11:47:32.111162] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:24.379 [2024-07-15 11:47:32.111198] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x127b540 0 00:21:24.379 [2024-07-15 11:47:32.117751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.379 [2024-07-15 11:47:32.117775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.379 [2024-07-15 11:47:32.117784] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.379 [2024-07-15 11:47:32.117789] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.379 [2024-07-15 11:47:32.117829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.117840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.117846] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.379 [2024-07-15 11:47:32.117860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.379 [2024-07-15 11:47:32.117896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.379 [2024-07-15 11:47:32.125751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.379 [2024-07-15 11:47:32.125768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.379 [2024-07-15 11:47:32.125775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.125782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.379 [2024-07-15 11:47:32.125801] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.379 [2024-07-15 11:47:32.125811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:24.379 [2024-07-15 11:47:32.125820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:24.379 [2024-07-15 11:47:32.125837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.125846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.125852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.379 [2024-07-15 11:47:32.125863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.379 [2024-07-15 11:47:32.125886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.379 [2024-07-15 11:47:32.126054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.379 [2024-07-15 11:47:32.126066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.379 [2024-07-15 11:47:32.126087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.379 [2024-07-15 11:47:32.126101] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:24.379 [2024-07-15 11:47:32.126114] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:24.379 [2024-07-15 11:47:32.126125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.379 [2024-07-15 11:47:32.126148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.379 [2024-07-15 11:47:32.126168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.379 [2024-07-15 11:47:32.126255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.379 [2024-07-15 11:47:32.126267] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.379 [2024-07-15 11:47:32.126273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.379 [2024-07-15 11:47:32.126287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:24.379 [2024-07-15 11:47:32.126304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.379 [2024-07-15 11:47:32.126316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.379 [2024-07-15 11:47:32.126338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.379 [2024-07-15 11:47:32.126358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.379 [2024-07-15 11:47:32.126445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.379 [2024-07-15 11:47:32.126459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.379 [2024-07-15 11:47:32.126465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.379 [2024-07-15 11:47:32.126479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.379 [2024-07-15 11:47:32.126495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.379 [2024-07-15 11:47:32.126519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.379 [2024-07-15 11:47:32.126539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.379 [2024-07-15 11:47:32.126623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.379 [2024-07-15 11:47:32.126634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.379 [2024-07-15 11:47:32.126640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.379 [2024-07-15 11:47:32.126646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.379 [2024-07-15 11:47:32.126653] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:24.379 [2024-07-15 11:47:32.126661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:24.380 [2024-07-15 11:47:32.126673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.380 [2024-07-15 11:47:32.126783] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:24.380 [2024-07-15 11:47:32.126792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.380 [2024-07-15 11:47:32.126804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.126811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.126818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.126828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.380 [2024-07-15 11:47:32.126850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.380 [2024-07-15 11:47:32.127002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.380 [2024-07-15 11:47:32.127031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.380 [2024-07-15 11:47:32.127038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.380 [2024-07-15 11:47:32.127057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.380 [2024-07-15 11:47:32.127074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.127113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.380 [2024-07-15 11:47:32.127132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.380 [2024-07-15 11:47:32.127255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.380 [2024-07-15 11:47:32.127269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.380 [2024-07-15 11:47:32.127275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.380 [2024-07-15 11:47:32.127288] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.380 [2024-07-15 11:47:32.127296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:24.380 [2024-07-15 11:47:32.127309] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:24.380 [2024-07-15 11:47:32.127322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.380 
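At this point the enable handshake is done (FABRIC CONNECT, VS/CAP property reads, CC.EN = 1, CSTS.RDY = 1) and the host moves on to Identify. Roughly the same flow can be driven from an application through the public SPDK API; the sketch below is illustrative only, assuming SPDK's standard headers and the same transport ID string passed to spdk_nvme_identify above (program name and printed fields are arbitrary, error handling trimmed).

/* Illustrative sketch: connect to the target exercised in this trace and read
 * the cached Identify Controller data through the public SPDK API. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";          /* arbitrary application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the init sequence traced above: FABRIC
	 * CONNECT, VS/CAP reads, CC.EN toggling, IDENTIFY, SET FEATURES. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);    /* cached Identify Controller data */
	printf("Model: %.*s Serial: %.*s FW: %.*s\n",
	       (int)sizeof(cdata->mn), (const char *)cdata->mn,
	       (int)sizeof(cdata->sn), (const char *)cdata->sn,
	       (int)sizeof(cdata->fr), (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}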
[2024-07-15 11:47:32.127335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.127352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.380 [2024-07-15 11:47:32.127372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.380 [2024-07-15 11:47:32.127494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.380 [2024-07-15 11:47:32.127508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.380 [2024-07-15 11:47:32.127515] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127521] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=4096, cccid=0 00:21:24.380 [2024-07-15 11:47:32.127528] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12db3c0) on tqpair(0x127b540): expected_datao=0, payload_size=4096 00:21:24.380 [2024-07-15 11:47:32.127535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127551] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.127560] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.172752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.380 [2024-07-15 11:47:32.172771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.380 [2024-07-15 11:47:32.172778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.172785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.380 [2024-07-15 11:47:32.172796] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:24.380 [2024-07-15 11:47:32.172809] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:24.380 [2024-07-15 11:47:32.172817] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:24.380 [2024-07-15 11:47:32.172827] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:24.380 [2024-07-15 11:47:32.172835] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:24.380 [2024-07-15 11:47:32.172843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:24.380 [2024-07-15 11:47:32.172858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.380 [2024-07-15 11:47:32.172870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.172877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.172883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.172894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.380 [2024-07-15 11:47:32.172918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.380 [2024-07-15 11:47:32.173097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.380 [2024-07-15 11:47:32.173112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.380 [2024-07-15 11:47:32.173118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.380 [2024-07-15 11:47:32.173134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.173157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.380 [2024-07-15 11:47:32.173166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.173186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.380 [2024-07-15 11:47:32.173195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x127b540) 00:21:24.380 [2024-07-15 11:47:32.173215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.380 [2024-07-15 11:47:32.173224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.380 [2024-07-15 11:47:32.173236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.381 [2024-07-15 11:47:32.173244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.381 [2024-07-15 11:47:32.173252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173283] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.173289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.381 [2024-07-15 11:47:32.173302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:24.381 [2024-07-15 11:47:32.173324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db3c0, cid 0, qid 0 00:21:24.381 [2024-07-15 11:47:32.173334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db540, cid 1, qid 0 00:21:24.381 [2024-07-15 11:47:32.173342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db6c0, cid 2, qid 0 00:21:24.381 [2024-07-15 11:47:32.173349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.381 [2024-07-15 11:47:32.173356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.381 [2024-07-15 11:47:32.173536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.381 [2024-07-15 11:47:32.173550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.381 [2024-07-15 11:47:32.173556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.173563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.381 [2024-07-15 11:47:32.173570] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:24.381 [2024-07-15 11:47:32.173578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.173619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.173625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.381 [2024-07-15 11:47:32.173635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.381 [2024-07-15 11:47:32.173661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.381 [2024-07-15 11:47:32.173829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.381 [2024-07-15 11:47:32.173844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.381 [2024-07-15 11:47:32.173851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.173857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.381 [2024-07-15 11:47:32.173920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.173953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.173961] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.381 [2024-07-15 11:47:32.173971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.381 [2024-07-15 11:47:32.173992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.381 [2024-07-15 11:47:32.174157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.381 [2024-07-15 11:47:32.174170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.381 [2024-07-15 11:47:32.174176] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.174182] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=4096, cccid=4 00:21:24.381 [2024-07-15 11:47:32.174193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12db9c0) on tqpair(0x127b540): expected_datao=0, payload_size=4096 00:21:24.381 [2024-07-15 11:47:32.174201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.174217] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.174225] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.214898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.381 [2024-07-15 11:47:32.214916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.381 [2024-07-15 11:47:32.214924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.214931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.381 [2024-07-15 11:47:32.214949] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:24.381 [2024-07-15 11:47:32.214971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.214990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.215004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.215012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.381 [2024-07-15 11:47:32.215023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.381 [2024-07-15 11:47:32.215059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.381 [2024-07-15 11:47:32.215191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.381 [2024-07-15 11:47:32.215206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.381 [2024-07-15 11:47:32.215213] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.215219] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=4096, cccid=4 00:21:24.381 [2024-07-15 11:47:32.215226] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12db9c0) on tqpair(0x127b540): 
expected_datao=0, payload_size=4096 00:21:24.381 [2024-07-15 11:47:32.215233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.215249] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.215258] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.258747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.381 [2024-07-15 11:47:32.258765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.381 [2024-07-15 11:47:32.258772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.258779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.381 [2024-07-15 11:47:32.258803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.258824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:24.381 [2024-07-15 11:47:32.258838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.381 [2024-07-15 11:47:32.258846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.381 [2024-07-15 11:47:32.258857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.381 [2024-07-15 11:47:32.258880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.382 [2024-07-15 11:47:32.259061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.382 [2024-07-15 11:47:32.259076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.382 [2024-07-15 11:47:32.259083] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259089] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=4096, cccid=4 00:21:24.382 [2024-07-15 11:47:32.259096] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12db9c0) on tqpair(0x127b540): expected_datao=0, payload_size=4096 00:21:24.382 [2024-07-15 11:47:32.259103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259112] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259119] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.382 [2024-07-15 11:47:32.259189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.382 [2024-07-15 11:47:32.259196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.382 [2024-07-15 11:47:32.259216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages 
(timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259290] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:24.382 [2024-07-15 11:47:32.259308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:24.382 [2024-07-15 11:47:32.259316] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:24.382 [2024-07-15 11:47:32.259336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.259354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.382 [2024-07-15 11:47:32.259364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.259385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.382 [2024-07-15 11:47:32.259409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.382 [2024-07-15 11:47:32.259420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbb40, cid 5, qid 0 00:21:24.382 [2024-07-15 11:47:32.259585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.382 [2024-07-15 11:47:32.259599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.382 [2024-07-15 11:47:32.259605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.382 [2024-07-15 11:47:32.259625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.382 [2024-07-15 11:47:32.259634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.382 [2024-07-15 11:47:32.259640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbb40) on tqpair=0x127b540 00:21:24.382 [2024-07-15 11:47:32.259661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259669] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.259679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.382 [2024-07-15 11:47:32.259708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbb40, cid 5, qid 0 00:21:24.382 [2024-07-15 11:47:32.259930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.382 [2024-07-15 11:47:32.259944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.382 [2024-07-15 11:47:32.259951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbb40) on tqpair=0x127b540 00:21:24.382 [2024-07-15 11:47:32.259973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.259981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.259991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.382 [2024-07-15 11:47:32.260011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbb40, cid 5, qid 0 00:21:24.382 [2024-07-15 11:47:32.260145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.382 [2024-07-15 11:47:32.260157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.382 [2024-07-15 11:47:32.260164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.260170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbb40) on tqpair=0x127b540 00:21:24.382 [2024-07-15 11:47:32.260185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.260193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.260202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.382 [2024-07-15 11:47:32.260221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbb40, cid 5, qid 0 00:21:24.382 [2024-07-15 11:47:32.260314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.382 [2024-07-15 11:47:32.260326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.382 [2024-07-15 11:47:32.260332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.260338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbb40) on tqpair=0x127b540 00:21:24.382 [2024-07-15 11:47:32.260359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.260369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.260379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.382 [2024-07-15 11:47:32.260390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.260397] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.260406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.382 [2024-07-15 11:47:32.260421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.382 [2024-07-15 11:47:32.260428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x127b540) 00:21:24.382 [2024-07-15 11:47:32.260437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.383 [2024-07-15 11:47:32.260448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x127b540) 00:21:24.383 [2024-07-15 11:47:32.260463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.383 [2024-07-15 11:47:32.260484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbb40, cid 5, qid 0 00:21:24.383 [2024-07-15 11:47:32.260494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db9c0, cid 4, qid 0 00:21:24.383 [2024-07-15 11:47:32.260501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbcc0, cid 6, qid 0 00:21:24.383 [2024-07-15 11:47:32.260508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 7, qid 0 00:21:24.383 [2024-07-15 11:47:32.260752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.383 [2024-07-15 11:47:32.260767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.383 [2024-07-15 11:47:32.260774] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260780] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=8192, cccid=5 00:21:24.383 [2024-07-15 11:47:32.260803] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dbb40) on tqpair(0x127b540): expected_datao=0, payload_size=8192 00:21:24.383 [2024-07-15 11:47:32.260810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260840] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260850] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.383 [2024-07-15 11:47:32.260868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.383 [2024-07-15 11:47:32.260874] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260881] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=512, cccid=4 00:21:24.383 [2024-07-15 11:47:32.260888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12db9c0) on tqpair(0x127b540): expected_datao=0, payload_size=512 00:21:24.383 [2024-07-15 11:47:32.260896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260905] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.383 [2024-07-15 11:47:32.260929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.383 [2024-07-15 11:47:32.260935] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260941] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=512, cccid=6 00:21:24.383 [2024-07-15 11:47:32.260949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dbcc0) on tqpair(0x127b540): expected_datao=0, payload_size=512 00:21:24.383 [2024-07-15 11:47:32.260956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260965] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260972] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.260981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.383 [2024-07-15 11:47:32.260993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.383 [2024-07-15 11:47:32.261000] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261006] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127b540): datao=0, datal=4096, cccid=7 00:21:24.383 [2024-07-15 11:47:32.261014] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dbe40) on tqpair(0x127b540): expected_datao=0, payload_size=4096 00:21:24.383 [2024-07-15 11:47:32.261021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261046] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261053] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.383 [2024-07-15 11:47:32.261074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.383 [2024-07-15 11:47:32.261080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbb40) on tqpair=0x127b540 00:21:24.383 [2024-07-15 11:47:32.261119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.383 [2024-07-15 11:47:32.261130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.383 [2024-07-15 11:47:32.261136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db9c0) on tqpair=0x127b540 00:21:24.383 [2024-07-15 11:47:32.261157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.383 [2024-07-15 11:47:32.261167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.383 [2024-07-15 11:47:32.261173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbcc0) on tqpair=0x127b540 00:21:24.383 [2024-07-15 11:47:32.261189] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.383 [2024-07-15 11:47:32.261198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.383 [2024-07-15 11:47:32.261204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.383 [2024-07-15 11:47:32.261210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x127b540 00:21:24.383 ===================================================== 00:21:24.383 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.383 ===================================================== 00:21:24.383 Controller Capabilities/Features 00:21:24.383 ================================ 00:21:24.383 Vendor ID: 8086 00:21:24.383 Subsystem Vendor ID: 8086 00:21:24.383 Serial Number: SPDK00000000000001 00:21:24.383 Model Number: SPDK bdev Controller 00:21:24.383 Firmware Version: 24.09 00:21:24.383 Recommended Arb Burst: 6 00:21:24.383 IEEE OUI Identifier: e4 d2 5c 00:21:24.383 Multi-path I/O 00:21:24.383 May have multiple subsystem ports: Yes 00:21:24.383 May have multiple controllers: Yes 00:21:24.383 Associated with SR-IOV VF: No 00:21:24.383 Max Data Transfer Size: 131072 00:21:24.383 Max Number of Namespaces: 32 00:21:24.383 Max Number of I/O Queues: 127 00:21:24.383 NVMe Specification Version (VS): 1.3 00:21:24.383 NVMe Specification Version (Identify): 1.3 00:21:24.383 Maximum Queue Entries: 128 00:21:24.383 Contiguous Queues Required: Yes 00:21:24.383 Arbitration Mechanisms Supported 00:21:24.383 Weighted Round Robin: Not Supported 00:21:24.383 Vendor Specific: Not Supported 00:21:24.383 Reset Timeout: 15000 ms 00:21:24.383 Doorbell Stride: 4 bytes 00:21:24.383 NVM Subsystem Reset: Not Supported 00:21:24.383 Command Sets Supported 00:21:24.383 NVM Command Set: Supported 00:21:24.383 Boot Partition: Not Supported 00:21:24.383 Memory Page Size Minimum: 4096 bytes 00:21:24.383 Memory Page Size Maximum: 4096 bytes 00:21:24.383 Persistent Memory Region: Not Supported 00:21:24.383 Optional Asynchronous Events Supported 00:21:24.384 Namespace Attribute Notices: Supported 00:21:24.384 Firmware Activation Notices: Not Supported 00:21:24.384 ANA Change Notices: Not Supported 00:21:24.384 PLE Aggregate Log Change Notices: Not Supported 00:21:24.384 LBA Status Info Alert Notices: Not Supported 00:21:24.384 EGE Aggregate Log Change Notices: Not Supported 00:21:24.384 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.384 Zone Descriptor Change Notices: Not Supported 00:21:24.384 Discovery Log Change Notices: Not Supported 00:21:24.384 Controller Attributes 00:21:24.384 128-bit Host Identifier: Supported 00:21:24.384 Non-Operational Permissive Mode: Not Supported 00:21:24.384 NVM Sets: Not Supported 00:21:24.384 Read Recovery Levels: Not Supported 00:21:24.384 Endurance Groups: Not Supported 00:21:24.384 Predictable Latency Mode: Not Supported 00:21:24.384 Traffic Based Keep ALive: Not Supported 00:21:24.384 Namespace Granularity: Not Supported 00:21:24.384 SQ Associations: Not Supported 00:21:24.384 UUID List: Not Supported 00:21:24.384 Multi-Domain Subsystem: Not Supported 00:21:24.384 Fixed Capacity Management: Not Supported 00:21:24.384 Variable Capacity Management: Not Supported 00:21:24.384 Delete Endurance Group: Not Supported 00:21:24.384 Delete NVM Set: Not Supported 00:21:24.384 Extended LBA Formats Supported: Not Supported 00:21:24.384 Flexible Data Placement Supported: Not Supported 00:21:24.384 00:21:24.384 Controller Memory Buffer Support 
00:21:24.384 ================================ 00:21:24.384 Supported: No 00:21:24.384 00:21:24.384 Persistent Memory Region Support 00:21:24.384 ================================ 00:21:24.384 Supported: No 00:21:24.384 00:21:24.384 Admin Command Set Attributes 00:21:24.384 ============================ 00:21:24.384 Security Send/Receive: Not Supported 00:21:24.384 Format NVM: Not Supported 00:21:24.384 Firmware Activate/Download: Not Supported 00:21:24.384 Namespace Management: Not Supported 00:21:24.384 Device Self-Test: Not Supported 00:21:24.384 Directives: Not Supported 00:21:24.384 NVMe-MI: Not Supported 00:21:24.384 Virtualization Management: Not Supported 00:21:24.384 Doorbell Buffer Config: Not Supported 00:21:24.384 Get LBA Status Capability: Not Supported 00:21:24.384 Command & Feature Lockdown Capability: Not Supported 00:21:24.384 Abort Command Limit: 4 00:21:24.384 Async Event Request Limit: 4 00:21:24.384 Number of Firmware Slots: N/A 00:21:24.384 Firmware Slot 1 Read-Only: N/A 00:21:24.384 Firmware Activation Without Reset: N/A 00:21:24.384 Multiple Update Detection Support: N/A 00:21:24.384 Firmware Update Granularity: No Information Provided 00:21:24.384 Per-Namespace SMART Log: No 00:21:24.384 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.384 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:24.384 Command Effects Log Page: Supported 00:21:24.384 Get Log Page Extended Data: Supported 00:21:24.384 Telemetry Log Pages: Not Supported 00:21:24.384 Persistent Event Log Pages: Not Supported 00:21:24.384 Supported Log Pages Log Page: May Support 00:21:24.384 Commands Supported & Effects Log Page: Not Supported 00:21:24.384 Feature Identifiers & Effects Log Page:May Support 00:21:24.384 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.384 Data Area 4 for Telemetry Log: Not Supported 00:21:24.384 Error Log Page Entries Supported: 128 00:21:24.384 Keep Alive: Supported 00:21:24.384 Keep Alive Granularity: 10000 ms 00:21:24.384 00:21:24.384 NVM Command Set Attributes 00:21:24.384 ========================== 00:21:24.384 Submission Queue Entry Size 00:21:24.384 Max: 64 00:21:24.384 Min: 64 00:21:24.384 Completion Queue Entry Size 00:21:24.384 Max: 16 00:21:24.384 Min: 16 00:21:24.384 Number of Namespaces: 32 00:21:24.384 Compare Command: Supported 00:21:24.384 Write Uncorrectable Command: Not Supported 00:21:24.384 Dataset Management Command: Supported 00:21:24.384 Write Zeroes Command: Supported 00:21:24.384 Set Features Save Field: Not Supported 00:21:24.384 Reservations: Supported 00:21:24.384 Timestamp: Not Supported 00:21:24.384 Copy: Supported 00:21:24.384 Volatile Write Cache: Present 00:21:24.384 Atomic Write Unit (Normal): 1 00:21:24.384 Atomic Write Unit (PFail): 1 00:21:24.384 Atomic Compare & Write Unit: 1 00:21:24.384 Fused Compare & Write: Supported 00:21:24.384 Scatter-Gather List 00:21:24.384 SGL Command Set: Supported 00:21:24.384 SGL Keyed: Supported 00:21:24.384 SGL Bit Bucket Descriptor: Not Supported 00:21:24.384 SGL Metadata Pointer: Not Supported 00:21:24.384 Oversized SGL: Not Supported 00:21:24.384 SGL Metadata Address: Not Supported 00:21:24.384 SGL Offset: Supported 00:21:24.384 Transport SGL Data Block: Not Supported 00:21:24.384 Replay Protected Memory Block: Not Supported 00:21:24.384 00:21:24.384 Firmware Slot Information 00:21:24.384 ========================= 00:21:24.384 Active slot: 1 00:21:24.384 Slot 1 Firmware Revision: 24.09 00:21:24.384 00:21:24.384 00:21:24.384 Commands Supported and Effects 00:21:24.384 
============================== 00:21:24.384 Admin Commands 00:21:24.384 -------------- 00:21:24.384 Get Log Page (02h): Supported 00:21:24.384 Identify (06h): Supported 00:21:24.384 Abort (08h): Supported 00:21:24.384 Set Features (09h): Supported 00:21:24.384 Get Features (0Ah): Supported 00:21:24.384 Asynchronous Event Request (0Ch): Supported 00:21:24.384 Keep Alive (18h): Supported 00:21:24.384 I/O Commands 00:21:24.384 ------------ 00:21:24.384 Flush (00h): Supported LBA-Change 00:21:24.384 Write (01h): Supported LBA-Change 00:21:24.384 Read (02h): Supported 00:21:24.384 Compare (05h): Supported 00:21:24.384 Write Zeroes (08h): Supported LBA-Change 00:21:24.384 Dataset Management (09h): Supported LBA-Change 00:21:24.385 Copy (19h): Supported LBA-Change 00:21:24.385 00:21:24.385 Error Log 00:21:24.385 ========= 00:21:24.385 00:21:24.385 Arbitration 00:21:24.385 =========== 00:21:24.385 Arbitration Burst: 1 00:21:24.385 00:21:24.385 Power Management 00:21:24.385 ================ 00:21:24.385 Number of Power States: 1 00:21:24.385 Current Power State: Power State #0 00:21:24.385 Power State #0: 00:21:24.385 Max Power: 0.00 W 00:21:24.385 Non-Operational State: Operational 00:21:24.385 Entry Latency: Not Reported 00:21:24.385 Exit Latency: Not Reported 00:21:24.385 Relative Read Throughput: 0 00:21:24.385 Relative Read Latency: 0 00:21:24.385 Relative Write Throughput: 0 00:21:24.385 Relative Write Latency: 0 00:21:24.385 Idle Power: Not Reported 00:21:24.385 Active Power: Not Reported 00:21:24.385 Non-Operational Permissive Mode: Not Supported 00:21:24.385 00:21:24.385 Health Information 00:21:24.385 ================== 00:21:24.385 Critical Warnings: 00:21:24.385 Available Spare Space: OK 00:21:24.385 Temperature: OK 00:21:24.385 Device Reliability: OK 00:21:24.385 Read Only: No 00:21:24.385 Volatile Memory Backup: OK 00:21:24.385 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:24.385 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:24.385 Available Spare: 0% 00:21:24.385 Available Spare Threshold: 0% 00:21:24.385 Life Percentage Used:[2024-07-15 11:47:32.261324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x127b540) 00:21:24.385 [2024-07-15 11:47:32.261345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.385 [2024-07-15 11:47:32.261366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 7, qid 0 00:21:24.385 [2024-07-15 11:47:32.261525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.385 [2024-07-15 11:47:32.261539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.385 [2024-07-15 11:47:32.261545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x127b540 00:21:24.385 [2024-07-15 11:47:32.261608] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:24.385 [2024-07-15 11:47:32.261626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db3c0) on tqpair=0x127b540 00:21:24.385 [2024-07-15 11:47:32.261636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
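For reference: the Identify output in this block is interleaved with the nvme_tcp admin-queue debug trace, so the "Life Percentage Used:" field cut off above is completed a few entries further down ("0%", followed by the zeroed SMART counters) once the controller destruct/shutdown trace finishes. The same controller and namespace data can be pulled by hand against the listener shown here (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) with plain nvme-cli from the initiator side; this is only a minimal sketch, assuming nvme-cli is installed, the kernel nvme-tcp module is loaded, and that /dev/nvme0 is the controller created by the connect (the index depends on what is already attached):
  nvme discover -t tcp -a 10.0.0.2 -s 4420                               # discovery log: subsystem NQN, transport address
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                                                # controller data: serial/model, MDTS, supported log pages
  nvme id-ns   /dev/nvme0n1                                              # namespace data: NGUID, EUI64, LBA formats
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1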
00:21:24.385 [2024-07-15 11:47:32.261644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db540) on tqpair=0x127b540 00:21:24.385 [2024-07-15 11:47:32.261652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.385 [2024-07-15 11:47:32.261659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db6c0) on tqpair=0x127b540 00:21:24.385 [2024-07-15 11:47:32.261669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.385 [2024-07-15 11:47:32.261678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.385 [2024-07-15 11:47:32.261685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.385 [2024-07-15 11:47:32.261697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.385 [2024-07-15 11:47:32.261735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.385 [2024-07-15 11:47:32.261768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.385 [2024-07-15 11:47:32.261928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.385 [2024-07-15 11:47:32.261943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.385 [2024-07-15 11:47:32.261949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.385 [2024-07-15 11:47:32.261967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.385 [2024-07-15 11:47:32.261983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.385 [2024-07-15 11:47:32.261994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.385 [2024-07-15 11:47:32.262035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.385 [2024-07-15 11:47:32.262129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.385 [2024-07-15 11:47:32.262143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.385 [2024-07-15 11:47:32.262149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.386 [2024-07-15 11:47:32.262163] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:24.386 [2024-07-15 11:47:32.262170] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:24.386 [2024-07-15 11:47:32.262186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.386 [2024-07-15 11:47:32.262209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.386 [2024-07-15 11:47:32.262228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.386 [2024-07-15 11:47:32.262314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.386 [2024-07-15 11:47:32.262328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.386 [2024-07-15 11:47:32.262334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.386 [2024-07-15 11:47:32.262356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.386 [2024-07-15 11:47:32.262384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.386 [2024-07-15 11:47:32.262404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.386 [2024-07-15 11:47:32.262485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.386 [2024-07-15 11:47:32.262499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.386 [2024-07-15 11:47:32.262505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.386 [2024-07-15 11:47:32.262527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.386 [2024-07-15 11:47:32.262551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.386 [2024-07-15 11:47:32.262571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.386 [2024-07-15 11:47:32.262664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.386 [2024-07-15 11:47:32.262675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.386 [2024-07-15 11:47:32.262682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.386 [2024-07-15 11:47:32.262703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.262717] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127b540) 00:21:24.386 [2024-07-15 11:47:32.266752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.386 [2024-07-15 11:47:32.266780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12db840, cid 3, qid 0 00:21:24.386 [2024-07-15 11:47:32.266941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.386 [2024-07-15 11:47:32.266956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.386 [2024-07-15 11:47:32.266962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.386 [2024-07-15 11:47:32.266969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12db840) on tqpair=0x127b540 00:21:24.386 [2024-07-15 11:47:32.266982] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:24.386 0% 00:21:24.386 Data Units Read: 0 00:21:24.386 Data Units Written: 0 00:21:24.386 Host Read Commands: 0 00:21:24.386 Host Write Commands: 0 00:21:24.386 Controller Busy Time: 0 minutes 00:21:24.386 Power Cycles: 0 00:21:24.386 Power On Hours: 0 hours 00:21:24.386 Unsafe Shutdowns: 0 00:21:24.386 Unrecoverable Media Errors: 0 00:21:24.386 Lifetime Error Log Entries: 0 00:21:24.386 Warning Temperature Time: 0 minutes 00:21:24.386 Critical Temperature Time: 0 minutes 00:21:24.386 00:21:24.386 Number of Queues 00:21:24.386 ================ 00:21:24.386 Number of I/O Submission Queues: 127 00:21:24.386 Number of I/O Completion Queues: 127 00:21:24.386 00:21:24.386 Active Namespaces 00:21:24.386 ================= 00:21:24.386 Namespace ID:1 00:21:24.386 Error Recovery Timeout: Unlimited 00:21:24.386 Command Set Identifier: NVM (00h) 00:21:24.386 Deallocate: Supported 00:21:24.386 Deallocated/Unwritten Error: Not Supported 00:21:24.386 Deallocated Read Value: Unknown 00:21:24.386 Deallocate in Write Zeroes: Not Supported 00:21:24.386 Deallocated Guard Field: 0xFFFF 00:21:24.386 Flush: Supported 00:21:24.386 Reservation: Supported 00:21:24.386 Namespace Sharing Capabilities: Multiple Controllers 00:21:24.386 Size (in LBAs): 131072 (0GiB) 00:21:24.386 Capacity (in LBAs): 131072 (0GiB) 00:21:24.386 Utilization (in LBAs): 131072 (0GiB) 00:21:24.386 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:24.386 EUI64: ABCDEF0123456789 00:21:24.386 UUID: cb726df3-13cd-4c75-b843-396da3b1fa97 00:21:24.386 Thin Provisioning: Not Supported 00:21:24.386 Per-NS Atomic Units: Yes 00:21:24.386 Atomic Boundary Size (Normal): 0 00:21:24.386 Atomic Boundary Size (PFail): 0 00:21:24.386 Atomic Boundary Offset: 0 00:21:24.386 Maximum Single Source Range Length: 65535 00:21:24.386 Maximum Copy Length: 65535 00:21:24.386 Maximum Source Range Count: 1 00:21:24.386 NGUID/EUI64 Never Reused: No 00:21:24.386 Namespace Write Protected: No 00:21:24.386 Number of LBA Formats: 1 00:21:24.386 Current LBA Format: LBA Format #00 00:21:24.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:24.386 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.386 11:47:32 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.386 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.387 rmmod nvme_tcp 00:21:24.387 rmmod nvme_fabrics 00:21:24.387 rmmod nvme_keyring 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3080260 ']' 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3080260 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3080260 ']' 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3080260 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.387 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080260 00:21:24.644 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:24.644 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:24.644 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080260' 00:21:24.644 killing process with pid 3080260 00:21:24.644 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3080260 00:21:24.644 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3080260 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.905 11:47:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.812 11:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.812 00:21:26.812 real 0m5.662s 00:21:26.812 user 0m5.038s 00:21:26.812 sys 0m1.961s 00:21:26.812 11:47:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.812 11:47:34 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.812 ************************************ 00:21:26.812 END TEST nvmf_identify 00:21:26.812 ************************************ 00:21:26.812 11:47:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:26.812 11:47:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:26.812 11:47:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:26.812 11:47:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.812 11:47:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:26.812 ************************************ 00:21:26.812 START TEST nvmf_perf 00:21:26.812 ************************************ 00:21:26.812 11:47:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:27.070 * Looking for test storage... 00:21:27.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.070 11:47:34 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.071 11:47:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:28.978 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:28.978 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:28.978 Found net devices under 0000:84:00.0: cvl_0_0 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.978 11:47:36 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:28.978 Found net devices under 0000:84:00.1: cvl_0_1 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.978 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.979 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.237 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:29.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:21:29.237 00:21:29.237 --- 10.0.0.2 ping statistics --- 00:21:29.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.237 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:21:29.237 00:21:29.237 --- 10.0.0.1 ping statistics --- 00:21:29.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.237 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3082359 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3082359 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3082359 ']' 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.237 11:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:29.237 [2024-07-15 11:47:37.145609] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:29.237 [2024-07-15 11:47:37.145705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.237 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.237 [2024-07-15 11:47:37.213166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.500 [2024-07-15 11:47:37.327099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.501 [2024-07-15 11:47:37.327151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.501 [2024-07-15 11:47:37.327174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.501 [2024-07-15 11:47:37.327185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.501 [2024-07-15 11:47:37.327195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.501 [2024-07-15 11:47:37.327281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.501 [2024-07-15 11:47:37.327346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.501 [2024-07-15 11:47:37.327417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.501 [2024-07-15 11:47:37.327420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:30.130 11:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:33.442 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:33.443 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:33.699 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:21:33.699 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:33.956 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:33.956 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:21:33.956 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:33.956 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:33.956 11:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:34.213 [2024-07-15 11:47:42.028587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:21:34.213 11:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:34.470 11:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:34.470 11:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.727 11:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:34.727 11:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:34.985 11:47:42 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.243 [2024-07-15 11:47:43.092443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.243 11:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:35.502 11:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:21:35.502 11:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:21:35.502 11:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:35.502 11:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:21:36.879 Initializing NVMe Controllers 00:21:36.879 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:21:36.879 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:21:36.879 Initialization complete. Launching workers. 00:21:36.879 ======================================================== 00:21:36.879 Latency(us) 00:21:36.879 Device Information : IOPS MiB/s Average min max 00:21:36.879 PCIE (0000:82:00.0) NSID 1 from core 0: 84747.41 331.04 377.16 29.47 4375.62 00:21:36.879 ======================================================== 00:21:36.879 Total : 84747.41 331.04 377.16 29.47 4375.62 00:21:36.879 00:21:36.879 11:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.879 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.286 Initializing NVMe Controllers 00:21:38.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:38.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:38.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:38.286 Initialization complete. Launching workers. 
00:21:38.286 ======================================================== 00:21:38.286 Latency(us) 00:21:38.286 Device Information : IOPS MiB/s Average min max 00:21:38.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.00 0.37 10836.16 145.88 45250.76 00:21:38.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13229.93 7937.41 47900.61 00:21:38.286 ======================================================== 00:21:38.286 Total : 171.00 0.67 11900.06 145.88 47900.61 00:21:38.286 00:21:38.286 11:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:38.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.221 Initializing NVMe Controllers 00:21:39.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:39.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:39.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:39.221 Initialization complete. Launching workers. 00:21:39.221 ======================================================== 00:21:39.221 Latency(us) 00:21:39.221 Device Information : IOPS MiB/s Average min max 00:21:39.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8576.60 33.50 3732.13 724.77 9818.49 00:21:39.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.11 15.14 8270.95 4278.54 17062.79 00:21:39.221 ======================================================== 00:21:39.221 Total : 12451.71 48.64 5144.66 724.77 17062.79 00:21:39.221 00:21:39.480 11:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:39.480 11:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:39.480 11:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:39.480 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.011 Initializing NVMe Controllers 00:21:42.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.011 Controller IO queue size 128, less than required. 00:21:42.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.011 Controller IO queue size 128, less than required. 00:21:42.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:42.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:42.011 Initialization complete. Launching workers. 
00:21:42.011 ======================================================== 00:21:42.011 Latency(us) 00:21:42.011 Device Information : IOPS MiB/s Average min max 00:21:42.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1453.43 363.36 90419.99 45989.99 135295.96 00:21:42.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.97 143.24 229195.08 78766.57 357729.92 00:21:42.011 ======================================================== 00:21:42.011 Total : 2026.40 506.60 129659.13 45989.99 357729.92 00:21:42.011 00:21:42.011 11:47:49 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:42.011 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.011 No valid NVMe controllers or AIO or URING devices found 00:21:42.011 Initializing NVMe Controllers 00:21:42.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.011 Controller IO queue size 128, less than required. 00:21:42.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.011 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:42.011 Controller IO queue size 128, less than required. 00:21:42.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.011 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:42.011 WARNING: Some requested NVMe devices were skipped 00:21:42.011 11:47:49 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:42.011 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.300 Initializing NVMe Controllers 00:21:45.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.300 Controller IO queue size 128, less than required. 00:21:45.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.300 Controller IO queue size 128, less than required. 00:21:45.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.300 Initialization complete. Launching workers. 
00:21:45.300 00:21:45.300 ==================== 00:21:45.300 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:45.300 TCP transport: 00:21:45.300 polls: 8845 00:21:45.300 idle_polls: 5958 00:21:45.300 sock_completions: 2887 00:21:45.300 nvme_completions: 5197 00:21:45.300 submitted_requests: 7824 00:21:45.300 queued_requests: 1 00:21:45.300 00:21:45.300 ==================== 00:21:45.300 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:45.300 TCP transport: 00:21:45.300 polls: 12758 00:21:45.300 idle_polls: 9075 00:21:45.300 sock_completions: 3683 00:21:45.300 nvme_completions: 5293 00:21:45.300 submitted_requests: 7868 00:21:45.300 queued_requests: 1 00:21:45.300 ======================================================== 00:21:45.300 Latency(us) 00:21:45.300 Device Information : IOPS MiB/s Average min max 00:21:45.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1298.79 324.70 102566.28 61026.66 159732.71 00:21:45.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1322.79 330.70 99132.11 48751.04 166384.09 00:21:45.300 ======================================================== 00:21:45.300 Total : 2621.58 655.40 100833.47 48751.04 166384.09 00:21:45.300 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.300 rmmod nvme_tcp 00:21:45.300 rmmod nvme_fabrics 00:21:45.300 rmmod nvme_keyring 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3082359 ']' 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3082359 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3082359 ']' 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3082359 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082359 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082359' 00:21:45.300 killing process with pid 3082359 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3082359 00:21:45.300 11:47:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3082359 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.677 11:47:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.220 11:47:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.220 00:21:49.220 real 0m21.883s 00:21:49.220 user 1m8.184s 00:21:49.220 sys 0m5.631s 00:21:49.220 11:47:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.220 11:47:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:49.220 ************************************ 00:21:49.220 END TEST nvmf_perf 00:21:49.220 ************************************ 00:21:49.220 11:47:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:49.220 11:47:56 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:49.220 11:47:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:49.220 11:47:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.220 11:47:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.220 ************************************ 00:21:49.220 START TEST nvmf_fio_host 00:21:49.220 ************************************ 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:49.220 * Looking for test storage... 
00:21:49.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.220 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.221 11:47:56 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:51.127 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:51.127 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:51.127 Found net devices under 0000:84:00.0: cvl_0_0 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:51.127 Found net devices under 0000:84:00.1: cvl_0_1 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.127 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:21:51.128 00:21:51.128 --- 10.0.0.2 ping statistics --- 00:21:51.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.128 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:21:51.128 00:21:51.128 --- 10.0.0.1 ping statistics --- 00:21:51.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.128 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3086454 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3086454 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3086454 ']' 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.128 11:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.128 [2024-07-15 11:47:58.962122] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:51.128 [2024-07-15 11:47:58.962219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.128 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.128 [2024-07-15 11:47:59.025922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.385 [2024-07-15 11:47:59.128970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:51.385 [2024-07-15 11:47:59.129030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.385 [2024-07-15 11:47:59.129056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.385 [2024-07-15 11:47:59.129068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.385 [2024-07-15 11:47:59.129078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.385 [2024-07-15 11:47:59.129186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.385 [2024-07-15 11:47:59.129256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.385 [2024-07-15 11:47:59.129344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.385 [2024-07-15 11:47:59.129346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.385 11:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.385 11:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:51.385 11:47:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:51.643 [2024-07-15 11:47:59.481119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.643 11:47:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:51.643 11:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:51.643 11:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.643 11:47:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:51.900 Malloc1 00:21:51.900 11:47:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.157 11:48:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:52.414 11:48:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.671 [2024-07-15 11:48:00.535043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.671 11:48:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:52.928 11:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.184 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:53.184 fio-3.35 00:21:53.184 Starting 1 thread 00:21:53.184 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.712 00:21:55.712 test: (groupid=0, jobs=1): err= 0: pid=3086902: Mon Jul 15 11:48:03 2024 00:21:55.712 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:21:55.712 slat (usec): min=2, max=134, avg= 2.90, stdev= 2.00 00:21:55.712 clat (usec): min=2428, max=12721, avg=7633.04, stdev=611.52 00:21:55.712 lat (usec): min=2448, max=12723, avg=7635.95, stdev=611.42 00:21:55.712 clat percentiles (usec): 00:21:55.712 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:21:55.712 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:21:55.712 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:21:55.712 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[12256], 00:21:55.712 | 99.99th=[12649] 00:21:55.712 bw ( KiB/s): min=35000, 
max=37600, per=99.88%, avg=36746.00, stdev=1179.95, samples=4 00:21:55.712 iops : min= 8750, max= 9400, avg=9186.50, stdev=294.99, samples=4 00:21:55.712 write: IOPS=9200, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 zone resets 00:21:55.712 slat (nsec): min=2250, max=97373, avg=2983.48, stdev=1649.11 00:21:55.712 clat (usec): min=1135, max=11388, avg=6234.84, stdev=517.69 00:21:55.712 lat (usec): min=1142, max=11391, avg=6237.82, stdev=517.64 00:21:55.712 clat percentiles (usec): 00:21:55.712 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:21:55.712 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:21:55.712 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:21:55.712 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9503], 99.95th=[10814], 00:21:55.712 | 99.99th=[11338] 00:21:55.712 bw ( KiB/s): min=35792, max=37344, per=100.00%, avg=36812.00, stdev=693.42, samples=4 00:21:55.713 iops : min= 8948, max= 9336, avg=9203.00, stdev=173.36, samples=4 00:21:55.713 lat (msec) : 2=0.03%, 4=0.11%, 10=99.76%, 20=0.10% 00:21:55.713 cpu : usr=70.02%, sys=27.88%, ctx=92, majf=0, minf=40 00:21:55.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:55.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:55.713 issued rwts: total=18450,18456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:55.713 00:21:55.713 Run status group 0 (all jobs): 00:21:55.713 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:21:55.713 WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:55.713 11:48:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:55.713 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:55.713 fio-3.35 00:21:55.713 Starting 1 thread 00:21:55.713 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.289 00:21:58.289 test: (groupid=0, jobs=1): err= 0: pid=3087265: Mon Jul 15 11:48:06 2024 00:21:58.289 read: IOPS=8258, BW=129MiB/s (135MB/s)(259MiB/2006msec) 00:21:58.289 slat (usec): min=2, max=127, avg= 4.45, stdev= 2.73 00:21:58.289 clat (usec): min=1816, max=17060, avg=8948.36, stdev=1987.66 00:21:58.289 lat (usec): min=1820, max=17064, avg=8952.81, stdev=1987.73 00:21:58.289 clat percentiles (usec): 00:21:58.289 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7242], 00:21:58.289 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9503], 00:21:58.289 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[12125], 00:21:58.289 | 99.00th=[14222], 99.50th=[14877], 99.90th=[16319], 99.95th=[16712], 00:21:58.289 | 99.99th=[16909] 00:21:58.289 bw ( KiB/s): min=62368, max=73504, per=51.19%, avg=67648.00, stdev=5994.57, samples=4 00:21:58.289 iops : min= 3898, max= 4594, avg=4228.00, stdev=374.66, samples=4 00:21:58.289 write: IOPS=4801, BW=75.0MiB/s (78.7MB/s)(138MiB/1841msec); 0 zone resets 00:21:58.289 slat (usec): min=30, max=196, avg=39.10, stdev= 6.83 00:21:58.289 clat (usec): min=3734, max=20624, avg=11489.87, stdev=1849.73 00:21:58.289 lat (usec): min=3769, max=20676, avg=11528.96, stdev=1849.77 00:21:58.289 clat percentiles (usec): 00:21:58.289 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:21:58.289 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:21:58.289 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[15008], 00:21:58.289 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19268], 99.95th=[19792], 00:21:58.289 | 99.99th=[20579] 00:21:58.289 bw ( KiB/s): min=63648, max=76768, per=91.59%, avg=70368.00, stdev=6861.79, samples=4 00:21:58.289 iops : min= 3978, max= 4798, avg=4398.00, stdev=428.86, samples=4 00:21:58.289 lat (msec) : 2=0.02%, 4=0.16%, 10=51.20%, 20=48.61%, 50=0.02% 00:21:58.289 cpu : usr=81.85%, sys=17.11%, ctx=20, 
majf=0, minf=60 00:21:58.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:58.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.289 issued rwts: total=16567,8840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.289 00:21:58.289 Run status group 0 (all jobs): 00:21:58.289 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2006-2006msec 00:21:58.289 WRITE: bw=75.0MiB/s (78.7MB/s), 75.0MiB/s-75.0MiB/s (78.7MB/s-78.7MB/s), io=138MiB (145MB), run=1841-1841msec 00:21:58.289 11:48:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.603 11:48:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.604 rmmod nvme_tcp 00:21:58.604 rmmod nvme_fabrics 00:21:58.604 rmmod nvme_keyring 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3086454 ']' 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3086454 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3086454 ']' 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3086454 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3086454 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3086454' 00:21:58.604 killing process with pid 3086454 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3086454 00:21:58.604 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3086454 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.862 11:48:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.763 11:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.763 00:22:00.763 real 0m12.021s 00:22:00.763 user 0m35.924s 00:22:00.763 sys 0m3.718s 00:22:00.763 11:48:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.763 11:48:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.763 ************************************ 00:22:00.763 END TEST nvmf_fio_host 00:22:00.763 ************************************ 00:22:01.022 11:48:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:01.022 11:48:08 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:01.022 11:48:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:01.022 11:48:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.022 11:48:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.022 ************************************ 00:22:01.022 START TEST nvmf_failover 00:22:01.022 ************************************ 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:01.022 * Looking for test storage... 
00:22:01.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:01.022 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.023 11:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:02.928 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:02.928 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:02.928 Found net devices under 0000:84:00.0: cvl_0_0 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:02.928 Found net devices under 0000:84:00.1: cvl_0_1 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.928 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.929 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:03.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:22:03.187 00:22:03.187 --- 10.0.0.2 ping statistics --- 00:22:03.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.187 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:22:03.187 00:22:03.187 --- 10.0.0.1 ping statistics --- 00:22:03.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.187 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.187 11:48:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:03.187 11:48:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3089983 00:22:03.187 11:48:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:03.187 11:48:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3089983 00:22:03.187 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3089983 ']' 00:22:03.187 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.188 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.188 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.188 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.188 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:03.188 [2024-07-15 11:48:11.050184] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:22:03.188 [2024-07-15 11:48:11.050282] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.188 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.188 [2024-07-15 11:48:11.117605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.445 [2024-07-15 11:48:11.230317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.445 [2024-07-15 11:48:11.230373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.445 [2024-07-15 11:48:11.230395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.445 [2024-07-15 11:48:11.230406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.445 [2024-07-15 11:48:11.230417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.445 [2024-07-15 11:48:11.230508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.445 [2024-07-15 11:48:11.230646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.445 [2024-07-15 11:48:11.230654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.445 11:48:11 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:03.702 [2024-07-15 11:48:11.600156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.702 11:48:11 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:03.960 Malloc0 00:22:03.960 11:48:11 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.218 11:48:12 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.476 11:48:12 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.734 [2024-07-15 11:48:12.633138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.734 11:48:12 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:04.992 [2024-07-15 
11:48:12.873700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:04.992 11:48:12 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:05.250 [2024-07-15 11:48:13.114503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3090267 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3090267 /var/tmp/bdevperf.sock 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3090267 ']' 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.250 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:05.506 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.506 11:48:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:05.507 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.071 NVMe0n1 00:22:06.071 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.329 00:22:06.329 11:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3090400 00:22:06.329 11:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:06.329 11:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:07.266 11:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.524 11:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:10.813 11:48:18 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:11.071 00:22:11.071 11:48:18 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:11.330 [2024-07-15 11:48:19.148611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be 
set 00:22:11.330 [2024-07-15 11:48:19.148981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.148993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 [2024-07-15 11:48:19.149202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a910 is same with the state(5) to be set 00:22:11.330 11:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:14.625 11:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.625 [2024-07-15 11:48:22.436762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.625 11:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:15.560 11:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:15.820 [2024-07-15 
11:48:23.730356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same 
with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 [2024-07-15 11:48:23.730791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ef0 is same with the state(5) to be set 00:22:15.820 11:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3090400 00:22:22.386 0 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3090267 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3090267 ']' 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3090267 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090267 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090267' 00:22:22.386 killing process with pid 3090267 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3090267 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3090267 00:22:22.386 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:22.386 [2024-07-15 11:48:13.174577] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:22.386 [2024-07-15 11:48:13.174659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090267 ] 00:22:22.386 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.386 [2024-07-15 11:48:13.235314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.386 [2024-07-15 11:48:13.345883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.386 Running I/O for 15 seconds... 
00:22:22.386 [2024-07-15 11:48:15.448685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.448774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.448818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.448849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.448885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.448913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.448950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.448978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.448992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449079] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449377] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.386 [2024-07-15 11:48:15.449610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82176 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.449947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:22.386 [2024-07-15 11:48:15.449975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.449989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.450002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.450021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.450034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.386 [2024-07-15 11:48:15.450049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.386 [2024-07-15 11:48:15.450062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.450098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.387 [2024-07-15 11:48:15.450909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.450937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.450969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.450984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.450997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 
[2024-07-15 11:48:15.451153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.451980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.451993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:22.387 [2024-07-15 11:48:15.452344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.387 [2024-07-15 11:48:15.452455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.387 [2024-07-15 11:48:15.452469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:15.452507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.388 [2024-07-15 11:48:15.452555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.388 [2024-07-15 11:48:15.452567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0 00:22:22.388 [2024-07-15 11:48:15.452581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452640] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d2fc40 was disconnected and freed. reset controller. 
00:22:22.388 [2024-07-15 11:48:15.452658] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:22.388 [2024-07-15 11:48:15.452691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.388 [2024-07-15 11:48:15.452721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.388 [2024-07-15 11:48:15.452760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.388 [2024-07-15 11:48:15.452789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.388 [2024-07-15 11:48:15.452815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:15.452828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:22.388 [2024-07-15 11:48:15.456099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.388 [2024-07-15 11:48:15.456135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d09790 (9): Bad file descriptor 00:22:22.388 [2024-07-15 11:48:15.492224] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:22.388 [2024-07-15 11:48:19.150784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.150830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.150858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.150874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.150901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.150916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.150931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.150945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.150960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.150973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.150988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 
11:48:19.151141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.151402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.151981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.151997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91248 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.152055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.152083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.152111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.152140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.388 [2024-07-15 11:48:19.152169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 
11:48:19.152310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.388 [2024-07-15 11:48:19.152665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.388 [2024-07-15 11:48:19.152679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.152973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.152988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.153000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.153028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.153055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.389 [2024-07-15 11:48:19.153086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91512 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91520 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:22.389 [2024-07-15 11:48:19.153213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91528 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91536 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91544 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91552 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91560 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91568 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153495] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91576 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91584 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91592 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91600 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91608 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91616 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91624 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91632 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91640 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91648 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.153965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.153976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.153986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91656 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.153999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91664 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 
11:48:19.154066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91672 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91680 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91688 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91696 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91704 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91712 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154344] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91720 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91728 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91736 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91744 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91752 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91760 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91768 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91776 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91784 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91792 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91800 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 11:48:19.154875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91808 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.389 [2024-07-15 11:48:19.154900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.389 [2024-07-15 11:48:19.154911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.389 [2024-07-15 
11:48:19.154925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91816 len:8 PRP1 0x0 PRP2 0x0 00:22:22.389 [2024-07-15 11:48:19.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.154958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.154969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.154980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91824 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.154992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91832 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91840 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91848 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91032 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91040 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91048 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91056 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91064 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91072 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.390 [2024-07-15 11:48:19.155429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.390 [2024-07-15 11:48:19.155440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91080 len:8 PRP1 0x0 PRP2 0x0 00:22:22.390 [2024-07-15 11:48:19.155451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155509] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ed4500 was disconnected and freed. reset controller. 
00:22:22.390 [2024-07-15 11:48:19.155526] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:22.390 [2024-07-15 11:48:19.155560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.390 [2024-07-15 11:48:19.155578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.390 [2024-07-15 11:48:19.155606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.390 [2024-07-15 11:48:19.155632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.390 [2024-07-15 11:48:19.155658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:19.155670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:22.390 [2024-07-15 11:48:19.158905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.390 [2024-07-15 11:48:19.158944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d09790 (9): Bad file descriptor 00:22:22.390 [2024-07-15 11:48:19.308186] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:22.390 [2024-07-15 11:48:23.731734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.731822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.731855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.731885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.731914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.731944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.731973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.731987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732089] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732383] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61840 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.390 [2024-07-15 11:48:23.732690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.732963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.732978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:22.390 [2024-07-15 11:48:23.732992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.390 [2024-07-15 11:48:23.733204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.390 [2024-07-15 11:48:23.733217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733277] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.733479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733564] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.733984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 
[2024-07-15 11:48:23.734156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.391 [2024-07-15 11:48:23.734592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.734984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.734997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62152 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 
[2024-07-15 11:48:23.735321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.391 [2024-07-15 11:48:23.735465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.391 [2024-07-15 11:48:23.735512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.391 [2024-07-15 11:48:23.735524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62280 len:8 PRP1 0x0 PRP2 0x0 00:22:22.391 [2024-07-15 11:48:23.735537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735602] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d39750 was disconnected and freed. reset controller. 
00:22:22.391 [2024-07-15 11:48:23.735620] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:22.391 [2024-07-15 11:48:23.735655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.391 [2024-07-15 11:48:23.735673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.391 [2024-07-15 11:48:23.735688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.391 [2024-07-15 11:48:23.735702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.392 [2024-07-15 11:48:23.735715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.392 [2024-07-15 11:48:23.735727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.392 [2024-07-15 11:48:23.735748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.392 [2024-07-15 11:48:23.735762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.392 [2024-07-15 11:48:23.735775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:22.392 [2024-07-15 11:48:23.739020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.392 [2024-07-15 11:48:23.739061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d09790 (9): Bad file descriptor 00:22:22.392 [2024-07-15 11:48:23.772344] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:22.392 00:22:22.392 Latency(us) 00:22:22.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.392 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:22.392 Verification LBA range: start 0x0 length 0x4000 00:22:22.392 NVMe0n1 : 15.01 8794.23 34.35 567.39 0.00 13646.92 594.68 14854.83 00:22:22.392 =================================================================================================================== 00:22:22.392 Total : 8794.23 34.35 567.39 0.00 13646.92 594.68 14854.83 00:22:22.392 Received shutdown signal, test time was about 15.000000 seconds 00:22:22.392 00:22:22.392 Latency(us) 00:22:22.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.392 =================================================================================================================== 00:22:22.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3092249 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3092249 /var/tmp/bdevperf.sock 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3092249 ']' 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
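The pass/fail gate a few entries above (host/failover.sh@65-@67) reduces to counting reset notices in the captured bdevperf output. A minimal stand-alone sketch of that check, assuming the first bdevperf run's log was saved to try.txt (the later `cat .../try.txt` step suggests this, but the exact redirection is not visible in this trace):

    # Sketch of the reset-count check from host/failover.sh@65-@67.
    # Assumption: the first bdevperf run's output was captured to try.txt.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi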
00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:22.392 11:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.392 [2024-07-15 11:48:30.228500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.392 11:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:22.650 [2024-07-15 11:48:30.493271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:22.650 11:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.908 NVMe0n1 00:22:22.908 11:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.477 00:22:23.477 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.734 00:22:23.734 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.734 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:23.992 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.249 11:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:27.547 11:48:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.547 11:48:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:27.547 11:48:35 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3092914 00:22:27.547 11:48:35 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.547 11:48:35 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3092914 00:22:28.959 0 00:22:28.959 11:48:36 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:28.959 [2024-07-15 11:48:29.700497] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:22:28.959 [2024-07-15 11:48:29.700581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092249 ] 00:22:28.959 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.959 [2024-07-15 11:48:29.759313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.959 [2024-07-15 11:48:29.865238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.959 [2024-07-15 11:48:32.141942] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:28.959 [2024-07-15 11:48:32.142031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.959 [2024-07-15 11:48:32.142054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.959 [2024-07-15 11:48:32.142070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.959 [2024-07-15 11:48:32.142083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.959 [2024-07-15 11:48:32.142097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.959 [2024-07-15 11:48:32.142110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.959 [2024-07-15 11:48:32.142134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.959 [2024-07-15 11:48:32.142147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.959 [2024-07-15 11:48:32.142160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:28.959 [2024-07-15 11:48:32.142210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:28.959 [2024-07-15 11:48:32.142240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a2790 (9): Bad file descriptor 00:22:28.959 [2024-07-15 11:48:32.152914] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:28.959 Running I/O for 1 seconds... 
00:22:28.959 00:22:28.959 Latency(us) 00:22:28.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.959 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:28.959 Verification LBA range: start 0x0 length 0x4000 00:22:28.959 NVMe0n1 : 1.01 8812.87 34.43 0.00 0.00 14457.70 1116.54 12621.75 00:22:28.959 =================================================================================================================== 00:22:28.959 Total : 8812.87 34.43 0.00 0.00 14457.70 1116.54 12621.75 00:22:28.959 11:48:36 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.959 11:48:36 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:28.959 11:48:36 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:29.217 11:48:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:29.217 11:48:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:29.475 11:48:37 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:29.733 11:48:37 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3092249 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3092249 ']' 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3092249 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3092249 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3092249' 00:22:33.023 killing process with pid 3092249 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3092249 00:22:33.023 11:48:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3092249 00:22:33.282 11:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:33.282 11:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:33.540 
11:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.540 rmmod nvme_tcp 00:22:33.540 rmmod nvme_fabrics 00:22:33.540 rmmod nvme_keyring 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3089983 ']' 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3089983 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3089983 ']' 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3089983 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089983 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089983' 00:22:33.540 killing process with pid 3089983 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3089983 00:22:33.540 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3089983 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.108 11:48:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.009 11:48:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.009 00:22:36.009 real 0m35.078s 00:22:36.009 user 2m3.443s 00:22:36.009 sys 0m6.147s 00:22:36.009 11:48:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.009 11:48:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
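Condensed, the failover exercise traced above (host/failover.sh@72-@100) starts one bdevperf instance, registers three portals of the same subsystem under a single controller name, and then removes the active path while I/O is running. The following sketch repeats the same RPC sequence with the long /var/jenkins/workspace/... paths shortened to rpc.py, bdevperf and bdevperf.py for readability; everything else matches the trace:

    # Condensed from the trace above; short command names stand in for the full workspace paths.
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # Expose two extra portals for the same subsystem on the target.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Register all three paths under one controller name inside bdevperf.
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # Drop the active path and let the running I/O ride the failover.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests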
00:22:36.009 ************************************ 00:22:36.009 END TEST nvmf_failover 00:22:36.009 ************************************ 00:22:36.009 11:48:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:36.009 11:48:43 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:36.009 11:48:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:36.009 11:48:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.009 11:48:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.009 ************************************ 00:22:36.009 START TEST nvmf_host_discovery 00:22:36.009 ************************************ 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:36.009 * Looking for test storage... 00:22:36.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.009 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:36.010 11:48:43 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.010 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.604 11:48:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:38.604 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:38.604 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.604 11:48:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:38.604 Found net devices under 0000:84:00.0: cvl_0_0 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:38.604 Found net devices under 0000:84:00.1: cvl_0_1 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.604 11:48:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.604 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:22:38.605 00:22:38.605 --- 10.0.0.2 ping statistics --- 00:22:38.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.605 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:38.605 00:22:38.605 --- 10.0.0.1 ping statistics --- 00:22:38.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.605 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3095654 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3095654 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3095654 ']' 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.605 [2024-07-15 11:48:46.267877] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:38.605 [2024-07-15 11:48:46.267955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.605 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.605 [2024-07-15 11:48:46.331365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.605 [2024-07-15 11:48:46.438864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.605 [2024-07-15 11:48:46.438918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.605 [2024-07-15 11:48:46.438948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.605 [2024-07-15 11:48:46.438959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.605 [2024-07-15 11:48:46.438969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
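The nvmf_tcp_init steps earlier in this trace split the two e810 ports across a network namespace so that target and initiator can talk over real NICs on a single host. Boiled down, the plumbing is the following (interface and namespace names as in this run; nvmf_tgt shortened from the full build path):

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in and sanity-check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The nvmf target then runs inside the namespace (as at nvmf/common.sh@480).
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &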
00:22:38.605 [2024-07-15 11:48:46.438997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.605 [2024-07-15 11:48:46.579865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.605 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.605 [2024-07-15 11:48:46.588054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.864 null0 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.864 null1 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3095678 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3095678 /tmp/host.sock 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3095678 ']' 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:38.864 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.864 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.864 [2024-07-15 11:48:46.665245] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:38.864 [2024-07-15 11:48:46.665318] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095678 ] 00:22:38.864 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.864 [2024-07-15 11:48:46.724124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.864 [2024-07-15 11:48:46.832825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.123 11:48:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.123 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.123 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.124 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.124 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.124 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.382 [2024-07-15 11:48:47.253884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:39.382 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.383 11:48:47 
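(On the target side, the subsystem being discovered is provisioned with plain nvmf_* RPCs against the target's default RPC socket. A sketch of the sequence traced above; the null0 bdev and the TCP transport are assumed to have been created earlier in the test, outside this excerpt, e.g. via bdev_null_create and nvmf_create_transport:)

    # assumed prerequisites from earlier in the test, not shown in this excerpt
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py nvmf_create_transport -t tcp
    # create the subsystem, attach the namespace and open a TCP listener on port 4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # the host NQN used by the discovery client is allowed explicitly a few steps later in the trace
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test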
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.383 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:39.641 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:40.211 [2024-07-15 11:48:47.977438] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:40.211 [2024-07-15 11:48:47.977466] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:40.211 [2024-07-15 11:48:47.977488] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.211 [2024-07-15 11:48:48.064779] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:40.470 [2024-07-15 11:48:48.250831] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:22:40.470 [2024-07-15 11:48:48.250863] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.470 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.729 11:48:48 
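(The waitforcondition / get_subsystem_names pattern repeated throughout this trace is a bounded poll over an RPC query. Reconstructed as a simplified sketch from the xtrace output; the real helpers live in common/autotest_common.sh and host/discovery.sh:)

    # controller names currently known to the host app, as one sorted, space-separated line
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # evaluate an arbitrary condition up to 10 times, one second apart
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # e.g. block until the discovered subsystem shows up as controller "nvme0"
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'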
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.729 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.730 [2024-07-15 11:48:48.705933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.730 [2024-07-15 11:48:48.706326] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:40.730 [2024-07-15 11:48:48.706362] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.730 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.989 11:48:48 nvmf_tcp.nvmf_host_discovery -- 
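(The notification bookkeeping above, with notification_count and notify_id advancing 0 -> 1 -> 2, follows a simple cursor: fetch everything newer than the last seen notification id and move the cursor forward by the number returned. A sketch consistent with the trace:)

    # count notifications newer than $notify_id and advance the cursor
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # each namespace/listener change on the target is expected to surface as one notification
    expected_count=1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'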
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.990 [2024-07-15 11:48:48.792025] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:40.990 11:48:48
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:40.990 11:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:41.249 [2024-07-15 11:48:49.015158] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:41.250 [2024-07-15 11:48:49.015180] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:41.250 [2024-07-15 11:48:49.015189] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.189 [2024-07-15 11:48:49.913927] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:42.189 [2024-07-15 11:48:49.913959] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:42.189 [2024-07-15 11:48:49.922102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.189 [2024-07-15 11:48:49.922164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.189 [2024-07-15 11:48:49.922192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.189 [2024-07-15 11:48:49.922206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.189 [2024-07-15 11:48:49.922220] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.189 [2024-07-15 11:48:49.922235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.189 [2024-07-15 11:48:49.922248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.189 [2024-07-15 11:48:49.922262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.189 [2024-07-15 11:48:49.922275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.189 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.189 [2024-07-15 11:48:49.932120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.189 [2024-07-15 11:48:49.942177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.189 [2024-07-15 11:48:49.942407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.189 [2024-07-15 11:48:49.942435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d210 with addr=10.0.0.2, port=4420 00:22:42.189 [2024-07-15 11:48:49.942451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.190 [2024-07-15 11:48:49.942472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.190 [2024-07-15 11:48:49.942492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.190 [2024-07-15 11:48:49.942505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.190 [2024-07-15 11:48:49.942519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.190 [2024-07-15 11:48:49.942537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.190 [2024-07-15 11:48:49.952250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.190 [2024-07-15 11:48:49.952455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.190 [2024-07-15 11:48:49.952492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d210 with addr=10.0.0.2, port=4420 00:22:42.190 [2024-07-15 11:48:49.952507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.190 [2024-07-15 11:48:49.952528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.190 [2024-07-15 11:48:49.952547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.190 [2024-07-15 11:48:49.952560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.190 [2024-07-15 11:48:49.952572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.190 [2024-07-15 11:48:49.952589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:42.190 [2024-07-15 11:48:49.962334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.190 [2024-07-15 11:48:49.962523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.190 [2024-07-15 11:48:49.962551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d210 with addr=10.0.0.2, port=4420 00:22:42.190 [2024-07-15 11:48:49.962567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.190 [2024-07-15 11:48:49.962603] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.190 [2024-07-15 11:48:49.962623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.190 [2024-07-15 11:48:49.962636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.190 [2024-07-15 11:48:49.962648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.190 [2024-07-15 11:48:49.962666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.190 [2024-07-15 11:48:49.972423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.190 [2024-07-15 11:48:49.972621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.190 [2024-07-15 11:48:49.972648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d210 with addr=10.0.0.2, port=4420 00:22:42.190 [2024-07-15 11:48:49.972663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.190 [2024-07-15 11:48:49.972684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.190 [2024-07-15 11:48:49.972716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.190 [2024-07-15 11:48:49.972759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.190 [2024-07-15 11:48:49.972774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.190 [2024-07-15 11:48:49.972803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.190 [2024-07-15 11:48:49.982509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.190 [2024-07-15 11:48:49.982746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.190 [2024-07-15 11:48:49.982774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d210 with addr=10.0.0.2, port=4420 00:22:42.190 [2024-07-15 11:48:49.982796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.190 [2024-07-15 11:48:49.982819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.190 [2024-07-15 11:48:49.982852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.190 [2024-07-15 11:48:49.982870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.190 [2024-07-15 11:48:49.982883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.190 [2024-07-15 11:48:49.982914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.190 [2024-07-15 11:48:49.992592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.190 [2024-07-15 11:48:49.992840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.190 [2024-07-15 11:48:49.992869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110d210 with addr=10.0.0.2, port=4420 00:22:42.190 [2024-07-15 11:48:49.992884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110d210 is same with the state(5) to be set 00:22:42.190 [2024-07-15 11:48:49.992906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110d210 (9): Bad file descriptor 00:22:42.190 [2024-07-15 11:48:49.992938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.190 [2024-07-15 11:48:49.992956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.190 [2024-07-15 11:48:49.992969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.190 [2024-07-15 11:48:49.992988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.190 [2024-07-15 11:48:49.999954] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:42.190 [2024-07-15 11:48:49.999983] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:42.190 11:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:42.190 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery 
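(After the 4420 listener is removed, the expected end state checked above is that only the 4421 path remains on controller nvme0, after which discovery is stopped and the controller and its bdevs go away. The verification boils down to roughly:)

    # only the 4421 path should be left on the discovered controller
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs      # expected output: 4421
    # stopping discovery detaches the controller and removes its namespaces' bdevs
    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme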
-- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:42.191 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.451 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.388 [2024-07-15 11:48:51.276399] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:43.388 [2024-07-15 11:48:51.276445] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:43.388 [2024-07-15 11:48:51.276468] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:43.388 [2024-07-15 11:48:51.363692] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:43.647 [2024-07-15 11:48:51.472043] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:43.647 [2024-07-15 11:48:51.472091] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:43.647 request: 00:22:43.647 { 00:22:43.647 "name": "nvme", 00:22:43.647 "trtype": "tcp", 00:22:43.647 "traddr": "10.0.0.2", 00:22:43.647 "adrfam": "ipv4", 00:22:43.647 "trsvcid": "8009", 00:22:43.647 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.647 "wait_for_attach": true, 00:22:43.647 "method": "bdev_nvme_start_discovery", 00:22:43.647 "req_id": 1 00:22:43.647 } 00:22:43.647 Got JSON-RPC error response 00:22:43.647 response: 00:22:43.647 { 00:22:43.647 "code": -17, 00:22:43.647 "message": "File exists" 00:22:43.647 } 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.647 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.648 request: 00:22:43.648 { 00:22:43.648 "name": "nvme_second", 00:22:43.648 "trtype": "tcp", 00:22:43.648 "traddr": "10.0.0.2", 00:22:43.648 "adrfam": "ipv4", 00:22:43.648 "trsvcid": "8009", 00:22:43.648 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.648 "wait_for_attach": true, 00:22:43.648 "method": "bdev_nvme_start_discovery", 00:22:43.648 "req_id": 1 00:22:43.648 } 00:22:43.648 Got JSON-RPC error response 00:22:43.648 response: 00:22:43.648 { 00:22:43.648 "code": -17, 00:22:43.648 "message": "File exists" 00:22:43.648 } 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.648 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.906 11:48:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.906 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.842 [2024-07-15 11:48:52.683753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.842 [2024-07-15 11:48:52.683820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12eeb40 with addr=10.0.0.2, port=8010 00:22:44.842 [2024-07-15 11:48:52.683852] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:44.842 [2024-07-15 11:48:52.683866] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:44.842 [2024-07-15 11:48:52.683877] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:45.789 [2024-07-15 11:48:53.686262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.789 [2024-07-15 11:48:53.686331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12eeb40 with addr=10.0.0.2, port=8010 00:22:45.789 [2024-07-15 11:48:53.686359] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:45.789 [2024-07-15 11:48:53.686373] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:45.789 [2024-07-15 11:48:53.686385] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:46.728 [2024-07-15 11:48:54.688371] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:46.728 request: 00:22:46.728 { 00:22:46.728 "name": "nvme_second", 00:22:46.728 "trtype": "tcp", 00:22:46.728 "traddr": "10.0.0.2", 00:22:46.728 "adrfam": "ipv4", 00:22:46.728 "trsvcid": "8010", 00:22:46.728 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:46.728 "wait_for_attach": false, 00:22:46.728 "attach_timeout_ms": 3000, 00:22:46.728 "method": "bdev_nvme_start_discovery", 00:22:46.728 "req_id": 1 00:22:46.728 } 00:22:46.728 Got JSON-RPC error response 00:22:46.728 response: 00:22:46.728 { 00:22:46.728 "code": -110, 
00:22:46.728 "message": "Connection timed out" 00:22:46.728 } 00:22:46.728 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:46.728 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:46.729 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3095678 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.989 rmmod nvme_tcp 00:22:46.989 rmmod nvme_fabrics 00:22:46.989 rmmod nvme_keyring 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3095654 ']' 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3095654 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3095654 ']' 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3095654 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3095654 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3095654' 00:22:46.989 killing process with pid 3095654 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3095654 00:22:46.989 11:48:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3095654 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.248 11:48:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.784 00:22:49.784 real 0m13.249s 00:22:49.784 user 0m19.013s 00:22:49.784 sys 0m2.869s 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.784 ************************************ 00:22:49.784 END TEST nvmf_host_discovery 00:22:49.784 ************************************ 00:22:49.784 11:48:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:49.784 11:48:57 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:49.784 11:48:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:49.784 11:48:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.784 11:48:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.784 ************************************ 00:22:49.784 START TEST nvmf_host_multipath_status 00:22:49.784 ************************************ 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:49.784 * Looking for test storage... 
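The multipath_status run that starts here attaches the same subsystem over two listeners (4420 and 4421) in multipath mode, flips their ANA states with nvmf_subsystem_listener_set_ana_state, and then asserts the per-path view that bdevperf reports. A minimal sketch of the port_status helper the traces below keep invoking, assuming only the bdevperf RPC socket used in this run (/var/tmp/bdevperf.sock) and simplifying to a single poll group:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
port_status() {    # $1 trsvcid, $2 field (current|connected|accessible), $3 expected value
    local got
    got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
}
port_status 4420 current true      # e.g. after set_ANA_state optimized optimized
port_status 4421 current false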
00:22:49.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.784 11:48:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.784 11:48:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.690 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:51.691 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:51.691 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
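At this point the harness has classified the two detected functions (0000:84:00.0 and 0000:84:00.1, vendor 0x8086 device 0x159b, driver ice) as E810 ports, and in the trace that follows it resolves each one to its kernel net device through sysfs. A rough equivalent of that lookup, assuming only the standard sysfs layout (the cvl_0_0/cvl_0_1 names come from this run, not from the sketch):

for pci in 0000:84:00.0 0000:84:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e "$netdir" ]] || continue
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done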
00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:51.691 Found net devices under 0000:84:00.0: cvl_0_0 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:51.691 Found net devices under 0000:84:00.1: cvl_0_1 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.691 11:48:59 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:22:51.691 00:22:51.691 --- 10.0.0.2 ping statistics --- 00:22:51.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.691 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:51.691 00:22:51.691 --- 10.0.0.1 ping statistics --- 00:22:51.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.691 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3098725 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3098725 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3098725 ']' 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.691 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:51.691 [2024-07-15 11:48:59.573887] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:22:51.691 [2024-07-15 11:48:59.573960] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.691 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.691 [2024-07-15 11:48:59.637815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:51.950 [2024-07-15 11:48:59.747542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.950 [2024-07-15 11:48:59.747608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.950 [2024-07-15 11:48:59.747637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.950 [2024-07-15 11:48:59.747649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.950 [2024-07-15 11:48:59.747658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.950 [2024-07-15 11:48:59.747711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.950 [2024-07-15 11:48:59.747716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3098725 00:22:51.950 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:52.208 [2024-07-15 11:49:00.156751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.208 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:52.466 Malloc0 00:22:52.466 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:53.034 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.294 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.623 [2024-07-15 11:49:01.304254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.623 11:49:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:53.623 [2024-07-15 11:49:01.552890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3099014 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3099014 /var/tmp/bdevperf.sock 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3099014 ']' 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.903 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.162 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.162 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:54.162 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:54.420 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:54.991 Nvme0n1 00:22:54.991 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:55.559 Nvme0n1 00:22:55.559 11:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:55.559 11:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:57.464 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:57.464 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:57.722 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:57.980 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:59.353 11:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:59.353 11:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:59.353 11:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.353 11:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:59.353 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.353 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:59.353 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.353 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:59.611 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.611 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:59.611 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.611 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:59.867 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.867 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:59.867 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.867 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.124 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.124 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:00.124 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.124 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.381 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.381 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:00.381 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.381 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:00.637 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.637 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:00.637 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:01.217 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:01.217 11:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.590 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:02.848 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.848 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:02.848 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.848 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:03.106 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.106 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:03.106 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.106 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:03.364 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.364 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:03.364 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.364 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:03.621 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.621 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:03.621 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.621 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:04.188 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.188 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:04.188 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:04.188 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:04.447 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.824 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:06.082 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:06.082 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:06.082 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.082 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:06.340 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.340 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:06.340 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.340 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:06.599 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.599 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:06.599 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.599 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:07.166 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.166 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:07.166 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.166 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:07.423 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.423 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:07.423 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:07.681 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:07.939 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:08.875 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:08.875 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:08.875 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.875 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:09.133 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.133 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:09.133 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.133 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:09.392 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:09.392 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:09.392 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.392 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:09.650 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.650 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:09.650 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.650 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:09.908 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.908 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:09.908 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.908 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:10.167 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:23:10.167 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:10.167 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.167 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:10.425 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.425 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:10.425 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:10.992 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:10.992 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:12.366 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:12.366 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:12.366 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.366 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:12.366 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.366 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:12.366 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.366 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:12.624 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.624 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:12.624 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.624 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:12.882 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.882 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:23:12.882 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.882 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:13.140 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.140 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:13.140 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.140 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:13.398 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.398 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:13.398 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.398 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:13.656 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.656 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:13.656 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:13.913 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:14.172 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:15.105 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:15.105 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:15.105 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.105 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:15.363 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.363 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:15.363 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.363 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:15.620 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.620 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:15.620 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.620 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:15.888 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.888 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:15.888 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.888 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:16.150 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.150 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:16.150 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.150 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:16.407 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.407 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:16.407 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.408 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:16.664 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.664 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:16.920 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:16.920 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:23:17.177 11:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:17.434 11:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.812 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.069 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.069 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.069 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.069 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.326 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.326 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.326 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.326 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.583 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.583 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:19.583 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.583 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.148 11:49:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.148 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.148 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.148 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.148 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.148 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:20.148 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:20.406 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:20.970 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:21.908 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:21.908 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:21.908 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.908 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.166 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.166 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:22.166 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.166 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.425 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.425 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.425 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.425 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.683 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.683 11:49:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.683 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.683 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.940 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.940 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:22.940 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.940 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.199 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.199 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:23.199 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.199 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.457 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.457 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:23.457 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:23.715 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:23.974 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:25.349 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:25.349 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:25.349 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.349 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:25.349 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.349 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:25.349 11:49:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.349 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:25.607 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.607 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:25.607 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.607 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.865 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.865 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.865 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.865 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:26.123 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.123 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:26.123 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.123 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:26.384 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.384 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:26.642 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.642 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:26.901 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.901 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:26.901 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:27.157 11:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:27.416 11:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:28.350 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:28.350 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:28.350 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.350 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:28.609 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.609 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:28.609 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.609 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:28.867 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.867 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.867 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.867 11:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:29.126 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.126 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:29.126 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.126 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:29.384 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.384 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:29.384 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.384 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:29.949 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.949 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:29.949 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.949 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3099014 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3099014 ']' 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3099014 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.950 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3099014 00:23:30.208 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:30.208 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:30.208 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3099014' 00:23:30.208 killing process with pid 3099014 00:23:30.208 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3099014 00:23:30.208 11:49:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3099014 00:23:30.208 Connection closed with partial response: 00:23:30.208 00:23:30.208 00:23:30.470 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3099014 00:23:30.470 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.470 [2024-07-15 11:49:01.613295] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:30.470 [2024-07-15 11:49:01.613398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099014 ] 00:23:30.470 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.470 [2024-07-15 11:49:01.673871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.470 [2024-07-15 11:49:01.789209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.470 Running I/O for 90 seconds... 
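(For reference, the multipath status checks recorded above all follow one pattern: query the bdevperf io_paths over the RPC socket and filter the listener port of interest with jq, then compare the reported field against the expected value. A minimal sketch of that pattern, using only the socket path, ports, and RPC calls visible in this log; the helper name and argument order mirror the test script's port_status but are written here only as an illustration:)

    # Sketch only: check one io_path field for one listener port (values taken from the log above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    port_status() {   # $1=trsvcid  $2=field (current|connected|accessible)  $3=expected value
        local state
        state=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$state" == "$3" ]]
    }
    # Example usage, matching the sequence exercised above:
    #   $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    #   sleep 1
    #   port_status 4421 accessible true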
00:23:30.470 [2024-07-15 11:49:18.674607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.470 [2024-07-15 11:49:18.674689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.470 [2024-07-15 11:49:18.674813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.470 [2024-07-15 11:49:18.674837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.470 [2024-07-15 11:49:18.674862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.470 [2024-07-15 11:49:18.674880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:30.470 [2024-07-15 11:49:18.674903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.470 [2024-07-15 11:49:18.674920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:30.470 [2024-07-15 11:49:18.674943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.470 [2024-07-15 11:49:18.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:30.470 [2024-07-15 11:49:18.674982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.471 [2024-07-15 11:49:18.674999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.471 [2024-07-15 11:49:18.675039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.471 [2024-07-15 11:49:18.675095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.675707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.675747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 
[2024-07-15 11:49:18.676650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.676968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.676985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60216 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.677778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.677795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.678279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.678301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.678332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.678350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.678377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.471 [2024-07-15 11:49:18.678394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:30.471 [2024-07-15 11:49:18.678419] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:30.471 - 00:23:30.473 [2024-07-15 11:49:18.678440 - 11:49:35.219944] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on qid:1 (WRITE lba 60376-60920 and 43176-43232, READ lba 42264-43144, cids between 0 and 126); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
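Every data command in this window gets the same ANA status back, so when triaging a console capture like this it is quicker to tally the spdk_nvme_print_completion notices by status than to read them entry by entry. A minimal sketch, assuming the console text has been saved to build.log (a hypothetical path, not a file the test suite writes):

# Tally nvme_qpair completion notices per status string, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
# build.log stands in for wherever the Jenkins console output was captured.
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9a-f/]*)' build.log | sort | uniq -c | sort -rn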
00:23:30.473 Received shutdown signal, test time was about 34.472602 seconds
00:23:30.473
00:23:30.473 Latency(us)
00:23:30.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.473 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:30.473 Verification LBA range: start 0x0 length 0x4000
00:23:30.473 Nvme0n1 : 34.47 8476.72 33.11 0.00 0.00 15076.40 737.28 4026531.84
00:23:30.473 ===================================================================================================================
00:23:30.473 Total : 8476.72 33.11 0.00 0.00 15076.40 737.28 4026531.84
00:23:30.473 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:23:30.733 rmmod nvme_tcp 00:23:30.733 rmmod nvme_fabrics 00:23:30.733 rmmod nvme_keyring 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3098725 ']' 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3098725 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3098725 ']' 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3098725 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3098725 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3098725' 00:23:30.733 killing process with pid 3098725 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3098725 00:23:30.733 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3098725 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.989 11:49:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.521 11:49:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.521 00:23:33.521 real 0m43.689s 00:23:33.521 user 2m11.872s 00:23:33.521 sys 0m12.102s 00:23:33.521 11:49:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.521 11:49:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:33.521 ************************************ 00:23:33.521 END TEST nvmf_host_multipath_status 00:23:33.521 ************************************ 00:23:33.521 11:49:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:33.521 11:49:40 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:33.521 11:49:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.521 11:49:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.521 11:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.521 ************************************ 00:23:33.521 START TEST nvmf_discovery_remove_ifc 00:23:33.521 ************************************ 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:33.521 * Looking for test storage... 00:23:33.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.521 11:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.521 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:33.522 11:49:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.522 11:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.422 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
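gather_supported_nvmf_pci_devs, traced below, first builds the e810/x722/mlx PCI-ID lists and then resolves each matching PCI function to its kernel net device by globbing sysfs. A minimal standalone sketch of that lookup, assuming the two E810 ports reported below; the loop itself is illustrative rather than the common.sh implementation:

# Map NVMe-oF capable PCI functions to their net interface names via sysfs,
# using the same /sys/bus/pci/devices/<pci>/net/* glob the script relies on.
for pci in 0000:84:00.0 0000:84:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue   # this function has no bound network interface
    echo "$pci -> $(basename "$netdir")"
  done
done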
00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:35.423 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:35.423 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:35.423 Found net devices under 0000:84:00.0: cvl_0_0 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:35.423 Found net devices under 0000:84:00.1: cvl_0_1 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:23:35.423 00:23:35.423 --- 10.0.0.2 ping statistics --- 00:23:35.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.423 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:35.423 00:23:35.423 --- 10.0.0.1 ping statistics --- 00:23:35.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.423 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3105498 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3105498 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3105498 ']' 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.423 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.423 [2024-07-15 11:49:43.254341] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
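At this point nvmf_tcp_init has finished its plumbing: one E810 port (cvl_0_0) has been moved into a network namespace to act as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions have been ping-verified. Condensed from the commands traced above (interface, namespace and address names taken from the log; the initial address flushes are omitted):

ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator side
ping -c 1 10.0.0.2                                             # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability

The target application is then launched inside that namespace (NVMF_APP is prefixed with the ip netns exec command, as the trace shows), which is what later allows the target-side interface to be yanked without touching the host-side SPDK app.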
00:23:35.423 [2024-07-15 11:49:43.254438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.423 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.423 [2024-07-15 11:49:43.320375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.708 [2024-07-15 11:49:43.424147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.708 [2024-07-15 11:49:43.424203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.708 [2024-07-15 11:49:43.424227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.708 [2024-07-15 11:49:43.424239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.708 [2024-07-15 11:49:43.424249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.708 [2024-07-15 11:49:43.424294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.708 [2024-07-15 11:49:43.557279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.708 [2024-07-15 11:49:43.565436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:35.708 null0 00:23:35.708 [2024-07-15 11:49:43.597404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3105528 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3105528 /tmp/host.sock 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3105528 ']' 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:35.708 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.708 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.708 [2024-07-15 11:49:43.659264] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:35.708 [2024-07-15 11:49:43.659329] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105528 ] 00:23:35.708 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.965 [2024-07-15 11:49:43.716979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.965 [2024-07-15 11:49:43.829871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.965 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.223 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.223 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:36.223 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.223 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.158 [2024-07-15 11:49:45.034909] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:37.158 [2024-07-15 11:49:45.034934] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:37.158 [2024-07-15 11:49:45.034957] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:37.158 [2024-07-15 11:49:45.121289] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:37.418 [2024-07-15 11:49:45.228978] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:37.418 [2024-07-15 11:49:45.229058] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:37.418 [2024-07-15 11:49:45.229098] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:37.418 [2024-07-15 11:49:45.229119] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.418 [2024-07-15 11:49:45.229154] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.418 [2024-07-15 11:49:45.233140] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xaf3e00 was disconnected and freed. delete nvme_qpair. 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.418 11:49:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:37.418 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:38.823 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.762 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.701 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.641 11:49:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.641 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.577 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.835 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.835 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.835 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.835 [2024-07-15 11:49:50.670317] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:42.835 [2024-07-15 11:49:50.670379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.835 [2024-07-15 11:49:50.670400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.835 [2024-07-15 11:49:50.670419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.835 [2024-07-15 11:49:50.670436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.835 [2024-07-15 11:49:50.670449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.835 [2024-07-15 11:49:50.670467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.835 [2024-07-15 11:49:50.670480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.835 [2024-07-15 11:49:50.670492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.835 [2024-07-15 11:49:50.670505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.835 [2024-07-15 11:49:50.670527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.835 [2024-07-15 11:49:50.670542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba870 is same with the state(5) to be set 00:23:42.835 [2024-07-15 11:49:50.680336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaba870 (9): Bad file descriptor 00:23:42.835 [2024-07-15 11:49:50.690379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.768 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.768 [2024-07-15 11:49:51.751806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:43.768 [2024-07-15 11:49:51.751858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaba870 with addr=10.0.0.2, port=4420 00:23:43.768 [2024-07-15 11:49:51.751883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba870 is same with the state(5) to be set 00:23:43.768 [2024-07-15 11:49:51.751921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaba870 (9): Bad file descriptor 00:23:43.768 [2024-07-15 11:49:51.752342] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.768 [2024-07-15 11:49:51.752375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:43.768 [2024-07-15 11:49:51.752391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:43.768 [2024-07-15 11:49:51.752409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:43.768 [2024-07-15 11:49:51.752437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
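The errors above are the expected effect of the earlier ip addr del / ip link set cvl_0_0 down: the host's admin queue to nqn.2016-06.io.spdk:cnode0 times out (errno 110), its outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands are aborted, and bdev_nvme starts its reset/reconnect cycle. Meanwhile the script keeps polling the host app for its bdev list until nvme0n1 disappears. A condensed sketch of that polling loop, using scripts/rpc.py directly in place of the test's rpc_cmd wrapper (the helper bodies are an approximation of host/discovery_remove_ifc.sh, not a verbatim copy):

get_bdev_list() {
    # List bdev names known to the host app on /tmp/host.sock, sorted onto one line.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
wait_for_bdev nvme0n1   # after discovery attach: the namespace bdev shows up
wait_for_bdev ''        # after the interface is pulled: the bdev must go away again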
00:23:43.768 [2024-07-15 11:49:51.752455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:44.031 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.031 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:44.031 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.971 [2024-07-15 11:49:52.754958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:44.971 [2024-07-15 11:49:52.755015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:44.971 [2024-07-15 11:49:52.755046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:44.971 [2024-07-15 11:49:52.755061] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:44.971 [2024-07-15 11:49:52.755106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.971 [2024-07-15 11:49:52.755146] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:44.971 [2024-07-15 11:49:52.755192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.971 [2024-07-15 11:49:52.755214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.971 [2024-07-15 11:49:52.755235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.971 [2024-07-15 11:49:52.755248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.971 [2024-07-15 11:49:52.755262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.971 [2024-07-15 11:49:52.755277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.971 [2024-07-15 11:49:52.755292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.971 [2024-07-15 11:49:52.755305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.971 [2024-07-15 11:49:52.755319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.971 [2024-07-15 11:49:52.755341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.971 [2024-07-15 11:49:52.755354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
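How long the dead controller is allowed to linger is governed by the flags passed to bdev_nvme_start_discovery earlier in the trace: --reconnect-delay-sec 1 retries roughly once per second, --fast-io-fail-timeout-sec 1 fails I/O quickly, and --ctrlr-loss-timeout-sec 2 gives up and deletes the controller (and with it nvme0n1) about two seconds after the disconnect is detected, which is what lets the empty-list wait return. The interface teardown above and the restore that follows below reduce to a handful of ip commands (condensed from the trace, names as logged, using the wait_for_bdev sketch from earlier):

ns=cvl_0_0_ns_spdk ifc=cvl_0_0 addr=10.0.0.2/24
ip netns exec "$ns" ip addr del "$addr" dev "$ifc"   # take the target address away
ip netns exec "$ns" ip link set "$ifc" down          # and drop the link
wait_for_bdev ''                                     # host tears nvme0n1 down
ip netns exec "$ns" ip addr add "$addr" dev "$ifc"   # put the address back
ip netns exec "$ns" ip link set "$ifc" up
wait_for_bdev nvme1n1                                # persistent discovery re-attaches as nvme1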
00:23:44.971 [2024-07-15 11:49:52.755454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab9cf0 (9): Bad file descriptor 00:23:44.971 [2024-07-15 11:49:52.756487] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:44.971 [2024-07-15 11:49:52.756509] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.971 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:44.972 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.352 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.917 [2024-07-15 11:49:54.810446] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.917 [2024-07-15 11:49:54.810470] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.917 [2024-07-15 11:49:54.810492] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:47.176 [2024-07-15 11:49:54.937912] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:47.176 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.176 [2024-07-15 11:49:55.041714] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:47.176 [2024-07-15 11:49:55.041787] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:47.176 [2024-07-15 11:49:55.041825] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:47.176 [2024-07-15 11:49:55.041846] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:47.176 [2024-07-15 11:49:55.041858] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:47.176 [2024-07-15 11:49:55.048836] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xafd800 was disconnected and freed. delete nvme_qpair. 
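Once the address and link are restored, the still-running discovery service on the host reconnects to 10.0.0.2:8009, fetches the discovery log page again and attaches the same subsystem as a fresh controller, nvme1, whose namespace surfaces as nvme1n1. To confirm that state by hand one could query the host app (this is not part of the traced test; bdev_nvme_get_controllers is a standard SPDK RPC):

scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers          # should now list nvme1
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'  # -> nvme1n1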
00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.110 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3105528 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3105528 ']' 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3105528 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3105528 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3105528' 00:23:48.110 killing process with pid 3105528 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3105528 00:23:48.110 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3105528 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.368 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.368 rmmod nvme_tcp 00:23:48.368 rmmod nvme_fabrics 00:23:48.626 rmmod nvme_keyring 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3105498 ']' 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3105498 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3105498 ']' 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3105498 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3105498 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3105498' 00:23:48.626 killing process with pid 3105498 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3105498 00:23:48.626 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3105498 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.884 11:49:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.791 11:49:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.791 00:23:50.791 real 0m17.777s 00:23:50.791 user 0m25.737s 00:23:50.791 sys 0m3.032s 00:23:50.791 11:49:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.791 11:49:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.791 ************************************ 00:23:50.791 END TEST nvmf_discovery_remove_ifc 00:23:50.791 ************************************ 00:23:50.791 11:49:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.791 11:49:58 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.791 11:49:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.791 11:49:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.791 11:49:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.791 ************************************ 00:23:50.791 START TEST nvmf_identify_kernel_target 00:23:50.791 ************************************ 
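The START/END banners and the real/user/sys lines around each test come from the run_test wrapper, which times the test script and records pass or fail before moving on to the next one. A hypothetical sketch of such a wrapper (the real helper lives in test/common/autotest_common.sh and differs in argument checking and bookkeeping):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # run the test script; real/user/sys are printed when it exits
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test nvmf_identify_kernel_target test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp

The invocation on the last line mirrors the one in the trace (full workspace path shortened here), which is why the log continues straight into the nvmf_identify_kernel_target test below.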
00:23:50.791 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:51.050 * Looking for test storage... 00:23:51.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.050 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:51.051 11:49:58 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.051 11:49:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:53.585 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:53.585 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:53.585 Found net devices under 0000:84:00.0: cvl_0_0 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:53.585 Found net devices under 0000:84:00.1: cvl_0_1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
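The device-discovery pass traced above first matches the supported NIC device IDs (0x159b for the e810 ports on this host) and then resolves each PCI address to its kernel net interface by globbing /sys/bus/pci/devices/<addr>/net/. A standalone sketch of that mapping follows; it uses lspci to enumerate matching devices, whereas the harness builds its list from a pci_bus_cache helper that is not shown in this excerpt.

# Map e810 NIC PCI addresses to their net interface names, mirroring the
# pci_net_devs glob in the trace. Assumes lspci is available.
intel=0x8086
device=0x159b                                        # e810 device id reported in the trace
declare -a pci_devs net_devs
while read -r addr _; do
  pci_devs+=("$addr")
done < <(lspci -Dnd "${intel#0x}:${device#0x}")
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) bound to this PCI function
  [[ -e ${pci_net_devs[0]} ]] || continue            # no netdev: skip (e.g. bound to vfio-pci)
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done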
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:23:53.585 00:23:53.585 --- 10.0.0.2 ping statistics --- 00:23:53.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.585 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:23:53.585 00:23:53.585 --- 10.0.0.1 ping statistics --- 00:23:53.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.585 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.585 11:50:01 
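The nvmf_tcp_init steps traced just above turn the two e810 ports into a self-contained initiator/target pair: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in the firewall, and reachability is confirmed with one ping in each direction. The same sequence, condensed into a standalone form (interface names and addresses taken from the trace; must run as root):

# Two-port NVMe/TCP test topology, as set up by nvmf_tcp_init in the trace above.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target namespace -> initiator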
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:53.585 11:50:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:54.520 Waiting for block devices as requested 00:23:54.520 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:54.778 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:54.778 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:54.778 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:55.037 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:55.037 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:55.037 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:55.295 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:55.295 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:55.295 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:55.295 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:55.553 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:55.553 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:55.553 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:55.553 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:55.863 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:55.863 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:56.121 No valid GPT data, bailing 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:56.121 00:23:56.121 Discovery Log Number of Records 2, Generation counter 2 00:23:56.121 =====Discovery Log Entry 0====== 00:23:56.121 trtype: tcp 00:23:56.121 adrfam: ipv4 00:23:56.121 subtype: current discovery subsystem 00:23:56.121 treq: not specified, sq flow control disable supported 00:23:56.121 portid: 1 00:23:56.121 trsvcid: 4420 00:23:56.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:56.121 traddr: 10.0.0.1 00:23:56.121 eflags: none 00:23:56.121 sectype: none 00:23:56.121 =====Discovery Log Entry 1====== 00:23:56.121 trtype: tcp 00:23:56.121 adrfam: ipv4 00:23:56.121 subtype: nvme subsystem 00:23:56.121 treq: not specified, sq flow control disable supported 00:23:56.121 portid: 1 00:23:56.121 trsvcid: 4420 00:23:56.121 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:56.121 traddr: 10.0.0.1 00:23:56.121 eflags: none 00:23:56.121 sectype: none 00:23:56.121 11:50:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:56.121 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:56.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.121 ===================================================== 00:23:56.121 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:56.121 ===================================================== 00:23:56.121 Controller Capabilities/Features 00:23:56.121 ================================ 00:23:56.121 Vendor ID: 0000 00:23:56.121 Subsystem Vendor ID: 0000 00:23:56.121 Serial Number: 841981348920bfd674ef 00:23:56.121 Model Number: Linux 00:23:56.121 Firmware Version: 6.7.0-68 00:23:56.121 Recommended Arb Burst: 0 00:23:56.121 IEEE OUI Identifier: 00 00 00 00:23:56.121 Multi-path I/O 00:23:56.121 May have multiple subsystem ports: No 00:23:56.121 May have multiple 
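configure_kernel_target, traced above, builds a Linux kernel NVMe-oF target purely through the nvmet configfs tree: after the setup.sh reset and the GPT/in-use checks select /dev/nvme0n1 as the backing device, it creates a subsystem, a namespace and a port, writes their attributes, links the subsystem into the port, and verifies the result with nvme discover. xtrace does not record the redirection targets of the echo commands, so the attribute file names below are the standard nvmet ones and are assumed rather than read from the log.

# Minimal nvmet TCP target, mirroring the traced steps; run as root.
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
BDEV=/dev/nvme0n1                                  # backing device selected by the harness above
modprobe nvmet nvmet-tcp
mkdir "$NVMET/subsystems/$NQN"
mkdir "$NVMET/subsystems/$NQN/namespaces/1"
mkdir "$NVMET/ports/1"
echo "SPDK-$NQN" > "$NVMET/subsystems/$NQN/attr_model"          # assumed attribute file
echo 1           > "$NVMET/subsystems/$NQN/attr_allow_any_host" # assumed attribute file
echo "$BDEV"     > "$NVMET/subsystems/$NQN/namespaces/1/device_path"
echo 1           > "$NVMET/subsystems/$NQN/namespaces/1/enable"
echo 10.0.0.1    > "$NVMET/ports/1/addr_traddr"
echo tcp         > "$NVMET/ports/1/addr_trtype"
echo 4420        > "$NVMET/ports/1/addr_trsvcid"
echo ipv4        > "$NVMET/ports/1/addr_adrfam"
ln -s "$NVMET/subsystems/$NQN" "$NVMET/ports/1/subsystems/"
# Discovery from the initiator side, as in the trace:
nvme discover -t tcp -a 10.0.0.1 -s 4420

The traced discover call additionally passes --hostnqn and --hostid; they are omitted here for brevity.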
controllers: No 00:23:56.121 Associated with SR-IOV VF: No 00:23:56.121 Max Data Transfer Size: Unlimited 00:23:56.121 Max Number of Namespaces: 0 00:23:56.121 Max Number of I/O Queues: 1024 00:23:56.121 NVMe Specification Version (VS): 1.3 00:23:56.121 NVMe Specification Version (Identify): 1.3 00:23:56.121 Maximum Queue Entries: 1024 00:23:56.121 Contiguous Queues Required: No 00:23:56.121 Arbitration Mechanisms Supported 00:23:56.121 Weighted Round Robin: Not Supported 00:23:56.121 Vendor Specific: Not Supported 00:23:56.121 Reset Timeout: 7500 ms 00:23:56.121 Doorbell Stride: 4 bytes 00:23:56.121 NVM Subsystem Reset: Not Supported 00:23:56.121 Command Sets Supported 00:23:56.121 NVM Command Set: Supported 00:23:56.121 Boot Partition: Not Supported 00:23:56.121 Memory Page Size Minimum: 4096 bytes 00:23:56.121 Memory Page Size Maximum: 4096 bytes 00:23:56.121 Persistent Memory Region: Not Supported 00:23:56.121 Optional Asynchronous Events Supported 00:23:56.121 Namespace Attribute Notices: Not Supported 00:23:56.121 Firmware Activation Notices: Not Supported 00:23:56.121 ANA Change Notices: Not Supported 00:23:56.121 PLE Aggregate Log Change Notices: Not Supported 00:23:56.121 LBA Status Info Alert Notices: Not Supported 00:23:56.121 EGE Aggregate Log Change Notices: Not Supported 00:23:56.121 Normal NVM Subsystem Shutdown event: Not Supported 00:23:56.121 Zone Descriptor Change Notices: Not Supported 00:23:56.121 Discovery Log Change Notices: Supported 00:23:56.121 Controller Attributes 00:23:56.121 128-bit Host Identifier: Not Supported 00:23:56.121 Non-Operational Permissive Mode: Not Supported 00:23:56.121 NVM Sets: Not Supported 00:23:56.121 Read Recovery Levels: Not Supported 00:23:56.121 Endurance Groups: Not Supported 00:23:56.121 Predictable Latency Mode: Not Supported 00:23:56.121 Traffic Based Keep ALive: Not Supported 00:23:56.121 Namespace Granularity: Not Supported 00:23:56.121 SQ Associations: Not Supported 00:23:56.121 UUID List: Not Supported 00:23:56.121 Multi-Domain Subsystem: Not Supported 00:23:56.121 Fixed Capacity Management: Not Supported 00:23:56.121 Variable Capacity Management: Not Supported 00:23:56.121 Delete Endurance Group: Not Supported 00:23:56.121 Delete NVM Set: Not Supported 00:23:56.121 Extended LBA Formats Supported: Not Supported 00:23:56.121 Flexible Data Placement Supported: Not Supported 00:23:56.121 00:23:56.121 Controller Memory Buffer Support 00:23:56.121 ================================ 00:23:56.121 Supported: No 00:23:56.121 00:23:56.121 Persistent Memory Region Support 00:23:56.121 ================================ 00:23:56.121 Supported: No 00:23:56.121 00:23:56.121 Admin Command Set Attributes 00:23:56.121 ============================ 00:23:56.121 Security Send/Receive: Not Supported 00:23:56.121 Format NVM: Not Supported 00:23:56.121 Firmware Activate/Download: Not Supported 00:23:56.122 Namespace Management: Not Supported 00:23:56.122 Device Self-Test: Not Supported 00:23:56.122 Directives: Not Supported 00:23:56.122 NVMe-MI: Not Supported 00:23:56.122 Virtualization Management: Not Supported 00:23:56.122 Doorbell Buffer Config: Not Supported 00:23:56.122 Get LBA Status Capability: Not Supported 00:23:56.122 Command & Feature Lockdown Capability: Not Supported 00:23:56.122 Abort Command Limit: 1 00:23:56.122 Async Event Request Limit: 1 00:23:56.122 Number of Firmware Slots: N/A 00:23:56.122 Firmware Slot 1 Read-Only: N/A 00:23:56.122 Firmware Activation Without Reset: N/A 00:23:56.122 Multiple Update Detection Support: N/A 
00:23:56.122 Firmware Update Granularity: No Information Provided 00:23:56.122 Per-Namespace SMART Log: No 00:23:56.122 Asymmetric Namespace Access Log Page: Not Supported 00:23:56.122 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:56.122 Command Effects Log Page: Not Supported 00:23:56.122 Get Log Page Extended Data: Supported 00:23:56.122 Telemetry Log Pages: Not Supported 00:23:56.122 Persistent Event Log Pages: Not Supported 00:23:56.122 Supported Log Pages Log Page: May Support 00:23:56.122 Commands Supported & Effects Log Page: Not Supported 00:23:56.122 Feature Identifiers & Effects Log Page:May Support 00:23:56.122 NVMe-MI Commands & Effects Log Page: May Support 00:23:56.122 Data Area 4 for Telemetry Log: Not Supported 00:23:56.122 Error Log Page Entries Supported: 1 00:23:56.122 Keep Alive: Not Supported 00:23:56.122 00:23:56.122 NVM Command Set Attributes 00:23:56.122 ========================== 00:23:56.122 Submission Queue Entry Size 00:23:56.122 Max: 1 00:23:56.122 Min: 1 00:23:56.122 Completion Queue Entry Size 00:23:56.122 Max: 1 00:23:56.122 Min: 1 00:23:56.122 Number of Namespaces: 0 00:23:56.122 Compare Command: Not Supported 00:23:56.122 Write Uncorrectable Command: Not Supported 00:23:56.122 Dataset Management Command: Not Supported 00:23:56.122 Write Zeroes Command: Not Supported 00:23:56.122 Set Features Save Field: Not Supported 00:23:56.122 Reservations: Not Supported 00:23:56.122 Timestamp: Not Supported 00:23:56.122 Copy: Not Supported 00:23:56.122 Volatile Write Cache: Not Present 00:23:56.122 Atomic Write Unit (Normal): 1 00:23:56.122 Atomic Write Unit (PFail): 1 00:23:56.122 Atomic Compare & Write Unit: 1 00:23:56.122 Fused Compare & Write: Not Supported 00:23:56.122 Scatter-Gather List 00:23:56.122 SGL Command Set: Supported 00:23:56.122 SGL Keyed: Not Supported 00:23:56.122 SGL Bit Bucket Descriptor: Not Supported 00:23:56.122 SGL Metadata Pointer: Not Supported 00:23:56.122 Oversized SGL: Not Supported 00:23:56.122 SGL Metadata Address: Not Supported 00:23:56.122 SGL Offset: Supported 00:23:56.122 Transport SGL Data Block: Not Supported 00:23:56.122 Replay Protected Memory Block: Not Supported 00:23:56.122 00:23:56.122 Firmware Slot Information 00:23:56.122 ========================= 00:23:56.122 Active slot: 0 00:23:56.122 00:23:56.122 00:23:56.122 Error Log 00:23:56.122 ========= 00:23:56.122 00:23:56.122 Active Namespaces 00:23:56.122 ================= 00:23:56.122 Discovery Log Page 00:23:56.122 ================== 00:23:56.122 Generation Counter: 2 00:23:56.122 Number of Records: 2 00:23:56.122 Record Format: 0 00:23:56.122 00:23:56.122 Discovery Log Entry 0 00:23:56.122 ---------------------- 00:23:56.122 Transport Type: 3 (TCP) 00:23:56.122 Address Family: 1 (IPv4) 00:23:56.122 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:56.122 Entry Flags: 00:23:56.122 Duplicate Returned Information: 0 00:23:56.122 Explicit Persistent Connection Support for Discovery: 0 00:23:56.122 Transport Requirements: 00:23:56.122 Secure Channel: Not Specified 00:23:56.122 Port ID: 1 (0x0001) 00:23:56.122 Controller ID: 65535 (0xffff) 00:23:56.122 Admin Max SQ Size: 32 00:23:56.122 Transport Service Identifier: 4420 00:23:56.122 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:56.122 Transport Address: 10.0.0.1 00:23:56.122 Discovery Log Entry 1 00:23:56.122 ---------------------- 00:23:56.122 Transport Type: 3 (TCP) 00:23:56.122 Address Family: 1 (IPv4) 00:23:56.122 Subsystem Type: 2 (NVM Subsystem) 00:23:56.122 Entry Flags: 
00:23:56.122 Duplicate Returned Information: 0 00:23:56.122 Explicit Persistent Connection Support for Discovery: 0 00:23:56.122 Transport Requirements: 00:23:56.122 Secure Channel: Not Specified 00:23:56.122 Port ID: 1 (0x0001) 00:23:56.122 Controller ID: 65535 (0xffff) 00:23:56.122 Admin Max SQ Size: 32 00:23:56.122 Transport Service Identifier: 4420 00:23:56.122 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:56.122 Transport Address: 10.0.0.1 00:23:56.122 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:56.122 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.380 get_feature(0x01) failed 00:23:56.380 get_feature(0x02) failed 00:23:56.380 get_feature(0x04) failed 00:23:56.380 ===================================================== 00:23:56.380 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:56.380 ===================================================== 00:23:56.380 Controller Capabilities/Features 00:23:56.380 ================================ 00:23:56.380 Vendor ID: 0000 00:23:56.380 Subsystem Vendor ID: 0000 00:23:56.380 Serial Number: 1719b321f50abf09edb2 00:23:56.380 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:56.380 Firmware Version: 6.7.0-68 00:23:56.380 Recommended Arb Burst: 6 00:23:56.380 IEEE OUI Identifier: 00 00 00 00:23:56.380 Multi-path I/O 00:23:56.380 May have multiple subsystem ports: Yes 00:23:56.380 May have multiple controllers: Yes 00:23:56.380 Associated with SR-IOV VF: No 00:23:56.380 Max Data Transfer Size: Unlimited 00:23:56.380 Max Number of Namespaces: 1024 00:23:56.380 Max Number of I/O Queues: 128 00:23:56.380 NVMe Specification Version (VS): 1.3 00:23:56.380 NVMe Specification Version (Identify): 1.3 00:23:56.380 Maximum Queue Entries: 1024 00:23:56.380 Contiguous Queues Required: No 00:23:56.380 Arbitration Mechanisms Supported 00:23:56.380 Weighted Round Robin: Not Supported 00:23:56.380 Vendor Specific: Not Supported 00:23:56.380 Reset Timeout: 7500 ms 00:23:56.380 Doorbell Stride: 4 bytes 00:23:56.380 NVM Subsystem Reset: Not Supported 00:23:56.380 Command Sets Supported 00:23:56.380 NVM Command Set: Supported 00:23:56.380 Boot Partition: Not Supported 00:23:56.380 Memory Page Size Minimum: 4096 bytes 00:23:56.380 Memory Page Size Maximum: 4096 bytes 00:23:56.380 Persistent Memory Region: Not Supported 00:23:56.380 Optional Asynchronous Events Supported 00:23:56.380 Namespace Attribute Notices: Supported 00:23:56.380 Firmware Activation Notices: Not Supported 00:23:56.380 ANA Change Notices: Supported 00:23:56.380 PLE Aggregate Log Change Notices: Not Supported 00:23:56.380 LBA Status Info Alert Notices: Not Supported 00:23:56.380 EGE Aggregate Log Change Notices: Not Supported 00:23:56.380 Normal NVM Subsystem Shutdown event: Not Supported 00:23:56.380 Zone Descriptor Change Notices: Not Supported 00:23:56.380 Discovery Log Change Notices: Not Supported 00:23:56.380 Controller Attributes 00:23:56.380 128-bit Host Identifier: Supported 00:23:56.380 Non-Operational Permissive Mode: Not Supported 00:23:56.380 NVM Sets: Not Supported 00:23:56.380 Read Recovery Levels: Not Supported 00:23:56.380 Endurance Groups: Not Supported 00:23:56.380 Predictable Latency Mode: Not Supported 00:23:56.380 Traffic Based Keep ALive: Supported 00:23:56.380 Namespace Granularity: Not Supported 
00:23:56.380 SQ Associations: Not Supported 00:23:56.380 UUID List: Not Supported 00:23:56.380 Multi-Domain Subsystem: Not Supported 00:23:56.380 Fixed Capacity Management: Not Supported 00:23:56.380 Variable Capacity Management: Not Supported 00:23:56.380 Delete Endurance Group: Not Supported 00:23:56.380 Delete NVM Set: Not Supported 00:23:56.380 Extended LBA Formats Supported: Not Supported 00:23:56.380 Flexible Data Placement Supported: Not Supported 00:23:56.380 00:23:56.380 Controller Memory Buffer Support 00:23:56.380 ================================ 00:23:56.380 Supported: No 00:23:56.380 00:23:56.380 Persistent Memory Region Support 00:23:56.380 ================================ 00:23:56.380 Supported: No 00:23:56.380 00:23:56.380 Admin Command Set Attributes 00:23:56.380 ============================ 00:23:56.380 Security Send/Receive: Not Supported 00:23:56.380 Format NVM: Not Supported 00:23:56.380 Firmware Activate/Download: Not Supported 00:23:56.380 Namespace Management: Not Supported 00:23:56.380 Device Self-Test: Not Supported 00:23:56.380 Directives: Not Supported 00:23:56.380 NVMe-MI: Not Supported 00:23:56.380 Virtualization Management: Not Supported 00:23:56.380 Doorbell Buffer Config: Not Supported 00:23:56.380 Get LBA Status Capability: Not Supported 00:23:56.380 Command & Feature Lockdown Capability: Not Supported 00:23:56.380 Abort Command Limit: 4 00:23:56.380 Async Event Request Limit: 4 00:23:56.380 Number of Firmware Slots: N/A 00:23:56.380 Firmware Slot 1 Read-Only: N/A 00:23:56.380 Firmware Activation Without Reset: N/A 00:23:56.380 Multiple Update Detection Support: N/A 00:23:56.380 Firmware Update Granularity: No Information Provided 00:23:56.380 Per-Namespace SMART Log: Yes 00:23:56.380 Asymmetric Namespace Access Log Page: Supported 00:23:56.380 ANA Transition Time : 10 sec 00:23:56.380 00:23:56.380 Asymmetric Namespace Access Capabilities 00:23:56.380 ANA Optimized State : Supported 00:23:56.380 ANA Non-Optimized State : Supported 00:23:56.381 ANA Inaccessible State : Supported 00:23:56.381 ANA Persistent Loss State : Supported 00:23:56.381 ANA Change State : Supported 00:23:56.381 ANAGRPID is not changed : No 00:23:56.381 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:56.381 00:23:56.381 ANA Group Identifier Maximum : 128 00:23:56.381 Number of ANA Group Identifiers : 128 00:23:56.381 Max Number of Allowed Namespaces : 1024 00:23:56.381 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:56.381 Command Effects Log Page: Supported 00:23:56.381 Get Log Page Extended Data: Supported 00:23:56.381 Telemetry Log Pages: Not Supported 00:23:56.381 Persistent Event Log Pages: Not Supported 00:23:56.381 Supported Log Pages Log Page: May Support 00:23:56.381 Commands Supported & Effects Log Page: Not Supported 00:23:56.381 Feature Identifiers & Effects Log Page:May Support 00:23:56.381 NVMe-MI Commands & Effects Log Page: May Support 00:23:56.381 Data Area 4 for Telemetry Log: Not Supported 00:23:56.381 Error Log Page Entries Supported: 128 00:23:56.381 Keep Alive: Supported 00:23:56.381 Keep Alive Granularity: 1000 ms 00:23:56.381 00:23:56.381 NVM Command Set Attributes 00:23:56.381 ========================== 00:23:56.381 Submission Queue Entry Size 00:23:56.381 Max: 64 00:23:56.381 Min: 64 00:23:56.381 Completion Queue Entry Size 00:23:56.381 Max: 16 00:23:56.381 Min: 16 00:23:56.381 Number of Namespaces: 1024 00:23:56.381 Compare Command: Not Supported 00:23:56.381 Write Uncorrectable Command: Not Supported 00:23:56.381 Dataset Management Command: Supported 
00:23:56.381 Write Zeroes Command: Supported 00:23:56.381 Set Features Save Field: Not Supported 00:23:56.381 Reservations: Not Supported 00:23:56.381 Timestamp: Not Supported 00:23:56.381 Copy: Not Supported 00:23:56.381 Volatile Write Cache: Present 00:23:56.381 Atomic Write Unit (Normal): 1 00:23:56.381 Atomic Write Unit (PFail): 1 00:23:56.381 Atomic Compare & Write Unit: 1 00:23:56.381 Fused Compare & Write: Not Supported 00:23:56.381 Scatter-Gather List 00:23:56.381 SGL Command Set: Supported 00:23:56.381 SGL Keyed: Not Supported 00:23:56.381 SGL Bit Bucket Descriptor: Not Supported 00:23:56.381 SGL Metadata Pointer: Not Supported 00:23:56.381 Oversized SGL: Not Supported 00:23:56.381 SGL Metadata Address: Not Supported 00:23:56.381 SGL Offset: Supported 00:23:56.381 Transport SGL Data Block: Not Supported 00:23:56.381 Replay Protected Memory Block: Not Supported 00:23:56.381 00:23:56.381 Firmware Slot Information 00:23:56.381 ========================= 00:23:56.381 Active slot: 0 00:23:56.381 00:23:56.381 Asymmetric Namespace Access 00:23:56.381 =========================== 00:23:56.381 Change Count : 0 00:23:56.381 Number of ANA Group Descriptors : 1 00:23:56.381 ANA Group Descriptor : 0 00:23:56.381 ANA Group ID : 1 00:23:56.381 Number of NSID Values : 1 00:23:56.381 Change Count : 0 00:23:56.381 ANA State : 1 00:23:56.381 Namespace Identifier : 1 00:23:56.381 00:23:56.381 Commands Supported and Effects 00:23:56.381 ============================== 00:23:56.381 Admin Commands 00:23:56.381 -------------- 00:23:56.381 Get Log Page (02h): Supported 00:23:56.381 Identify (06h): Supported 00:23:56.381 Abort (08h): Supported 00:23:56.381 Set Features (09h): Supported 00:23:56.381 Get Features (0Ah): Supported 00:23:56.381 Asynchronous Event Request (0Ch): Supported 00:23:56.381 Keep Alive (18h): Supported 00:23:56.381 I/O Commands 00:23:56.381 ------------ 00:23:56.381 Flush (00h): Supported 00:23:56.381 Write (01h): Supported LBA-Change 00:23:56.381 Read (02h): Supported 00:23:56.381 Write Zeroes (08h): Supported LBA-Change 00:23:56.381 Dataset Management (09h): Supported 00:23:56.381 00:23:56.381 Error Log 00:23:56.381 ========= 00:23:56.381 Entry: 0 00:23:56.381 Error Count: 0x3 00:23:56.381 Submission Queue Id: 0x0 00:23:56.381 Command Id: 0x5 00:23:56.381 Phase Bit: 0 00:23:56.381 Status Code: 0x2 00:23:56.381 Status Code Type: 0x0 00:23:56.381 Do Not Retry: 1 00:23:56.381 Error Location: 0x28 00:23:56.381 LBA: 0x0 00:23:56.381 Namespace: 0x0 00:23:56.381 Vendor Log Page: 0x0 00:23:56.381 ----------- 00:23:56.381 Entry: 1 00:23:56.381 Error Count: 0x2 00:23:56.381 Submission Queue Id: 0x0 00:23:56.381 Command Id: 0x5 00:23:56.381 Phase Bit: 0 00:23:56.381 Status Code: 0x2 00:23:56.381 Status Code Type: 0x0 00:23:56.381 Do Not Retry: 1 00:23:56.381 Error Location: 0x28 00:23:56.381 LBA: 0x0 00:23:56.381 Namespace: 0x0 00:23:56.381 Vendor Log Page: 0x0 00:23:56.381 ----------- 00:23:56.381 Entry: 2 00:23:56.381 Error Count: 0x1 00:23:56.381 Submission Queue Id: 0x0 00:23:56.381 Command Id: 0x4 00:23:56.381 Phase Bit: 0 00:23:56.381 Status Code: 0x2 00:23:56.381 Status Code Type: 0x0 00:23:56.381 Do Not Retry: 1 00:23:56.381 Error Location: 0x28 00:23:56.381 LBA: 0x0 00:23:56.381 Namespace: 0x0 00:23:56.381 Vendor Log Page: 0x0 00:23:56.381 00:23:56.381 Number of Queues 00:23:56.381 ================ 00:23:56.381 Number of I/O Submission Queues: 128 00:23:56.381 Number of I/O Completion Queues: 128 00:23:56.381 00:23:56.381 ZNS Specific Controller Data 00:23:56.381 
============================ 00:23:56.381 Zone Append Size Limit: 0 00:23:56.381 00:23:56.381 00:23:56.381 Active Namespaces 00:23:56.381 ================= 00:23:56.381 get_feature(0x05) failed 00:23:56.381 Namespace ID:1 00:23:56.381 Command Set Identifier: NVM (00h) 00:23:56.381 Deallocate: Supported 00:23:56.381 Deallocated/Unwritten Error: Not Supported 00:23:56.381 Deallocated Read Value: Unknown 00:23:56.381 Deallocate in Write Zeroes: Not Supported 00:23:56.381 Deallocated Guard Field: 0xFFFF 00:23:56.381 Flush: Supported 00:23:56.381 Reservation: Not Supported 00:23:56.381 Namespace Sharing Capabilities: Multiple Controllers 00:23:56.381 Size (in LBAs): 1953525168 (931GiB) 00:23:56.381 Capacity (in LBAs): 1953525168 (931GiB) 00:23:56.381 Utilization (in LBAs): 1953525168 (931GiB) 00:23:56.381 UUID: e295d235-3609-4711-9b25-472005b8987a 00:23:56.381 Thin Provisioning: Not Supported 00:23:56.381 Per-NS Atomic Units: Yes 00:23:56.381 Atomic Boundary Size (Normal): 0 00:23:56.381 Atomic Boundary Size (PFail): 0 00:23:56.381 Atomic Boundary Offset: 0 00:23:56.381 NGUID/EUI64 Never Reused: No 00:23:56.381 ANA group ID: 1 00:23:56.381 Namespace Write Protected: No 00:23:56.381 Number of LBA Formats: 1 00:23:56.381 Current LBA Format: LBA Format #00 00:23:56.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:56.382 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.382 rmmod nvme_tcp 00:23:56.382 rmmod nvme_fabrics 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.382 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.285 
11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:58.285 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:58.543 11:50:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:59.918 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:59.918 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:59.918 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:00.851 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:24:00.851 00:24:00.851 real 0m9.951s 00:24:00.851 user 0m2.177s 00:24:00.851 sys 0m3.675s 00:24:00.851 11:50:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.851 11:50:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.851 ************************************ 00:24:00.851 END TEST nvmf_identify_kernel_target 00:24:00.851 ************************************ 00:24:00.851 11:50:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:00.851 11:50:08 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:00.851 11:50:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:00.851 11:50:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.851 11:50:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.851 ************************************ 00:24:00.851 START TEST nvmf_auth_host 00:24:00.851 ************************************ 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
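The teardown traced across the preceding lines mirrors the setup: nvmftestfini unloads the initiator modules and flushes the test addresses, and clean_kernel_target disables the subsystem, removes the port-to-subsystem link, deletes the configfs directories leaf-first, and unloads nvmet. remove_spdk_ns itself runs with tracing suppressed, so deleting the namespace explicitly is the assumed equivalent in the sketch below, which also reuses the assumed attribute names from the setup sketch above.

# Teardown in roughly the same order as the trace; run as root.
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
NS=cvl_0_0_ns_spdk
modprobe -v -r nvme-tcp                                # initiator modules, as in the trace
modprobe -v -r nvme-fabrics
ip netns del "$NS"                                     # assumed body of remove_spdk_ns; returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1
echo 0 > "$NVMET/subsystems/$NQN/namespaces/1/enable"  # assumed target of the traced 'echo 0'
rm -f "$NVMET/ports/1/subsystems/$NQN"                 # unlink the subsystem from the port
rmdir "$NVMET/subsystems/$NQN/namespaces/1"            # leaf directories first
rmdir "$NVMET/ports/1"
rmdir "$NVMET/subsystems/$NQN"
modprobe -r nvmet_tcp nvmet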
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:00.851 * Looking for test storage... 00:24:00.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.851 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:01.108 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.109 11:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.010 
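The auth.sh prologue traced above declares the parameter space for the host authentication tests: three digests (sha256, sha384, sha512), the five ffdhe groups, the subsystem and host NQNs, their nvmet configfs nodes, and empty keys/ckeys arrays. This excerpt does not show how the script consumes those arrays; the loop below is only an illustration of sweeping such a digest-by-dhgroup matrix in bash, not code taken from auth.sh.

# Illustrative sweep of the declared matrix; not the actual auth.sh control flow.
digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    echo "would exercise DH-HMAC-CHAP with hash=$digest dhgroup=$dhgroup"
  done
done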
11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:03.010 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:03.010 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:03.010 Found net devices under 0000:84:00.0: 
cvl_0_0 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:03.010 Found net devices under 0000:84:00.1: cvl_0_1 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.010 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.011 11:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:24:03.283 00:24:03.283 --- 10.0.0.2 ping statistics --- 00:24:03.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.283 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:03.283 00:24:03.283 --- 10.0.0.1 ping statistics --- 00:24:03.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.283 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3112767 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3112767 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3112767 ']' 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.283 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:03.541 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b81402ea09876e396d33df345fe4d83b 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.N4n 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b81402ea09876e396d33df345fe4d83b 0 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b81402ea09876e396d33df345fe4d83b 0 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b81402ea09876e396d33df345fe4d83b 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.N4n 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.N4n 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.N4n 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:03.542 
11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a0f7e758eb5f33b4eea8c9397aa6134217314d555c8d128d15bf6f61dc7b16a2 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Tgi 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a0f7e758eb5f33b4eea8c9397aa6134217314d555c8d128d15bf6f61dc7b16a2 3 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a0f7e758eb5f33b4eea8c9397aa6134217314d555c8d128d15bf6f61dc7b16a2 3 00:24:03.542 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a0f7e758eb5f33b4eea8c9397aa6134217314d555c8d128d15bf6f61dc7b16a2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Tgi 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Tgi 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Tgi 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac772464872fbb943218a86cef912d5e6ad80fcdbe98d241 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.48C 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac772464872fbb943218a86cef912d5e6ad80fcdbe98d241 0 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac772464872fbb943218a86cef912d5e6ad80fcdbe98d241 0 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac772464872fbb943218a86cef912d5e6ad80fcdbe98d241 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.48C 00:24:03.800 11:50:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.48C 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.48C 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a23a54451a2834eff1ec47694a7c6f88710baaf68dc34027 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Et2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a23a54451a2834eff1ec47694a7c6f88710baaf68dc34027 2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a23a54451a2834eff1ec47694a7c6f88710baaf68dc34027 2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a23a54451a2834eff1ec47694a7c6f88710baaf68dc34027 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Et2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Et2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Et2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4f7c546557ce6d81224273cec1c9c0e 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JDI 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4f7c546557ce6d81224273cec1c9c0e 1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4f7c546557ce6d81224273cec1c9c0e 1 
00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4f7c546557ce6d81224273cec1c9c0e 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JDI 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JDI 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.JDI 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5926e6086afd6ebaf20a1191060d93e5 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KLK 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5926e6086afd6ebaf20a1191060d93e5 1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5926e6086afd6ebaf20a1191060d93e5 1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5926e6086afd6ebaf20a1191060d93e5 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KLK 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KLK 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.KLK 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=72a8b0d9bf2a1c09362723e82883f76b3825115dcb4b84b2 00:24:03.800 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XF7 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 72a8b0d9bf2a1c09362723e82883f76b3825115dcb4b84b2 2 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 72a8b0d9bf2a1c09362723e82883f76b3825115dcb4b84b2 2 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=72a8b0d9bf2a1c09362723e82883f76b3825115dcb4b84b2 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:03.801 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XF7 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XF7 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.XF7 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a1fd72dcccf46b73ae3ffa3c6eb2732b 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.S61 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a1fd72dcccf46b73ae3ffa3c6eb2732b 0 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a1fd72dcccf46b73ae3ffa3c6eb2732b 0 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a1fd72dcccf46b73ae3ffa3c6eb2732b 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.S61 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.S61 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.S61 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35a2eb97397c3adad64301d659752afc695be6bf54499fe5c2bb7d88eb22549d 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xbm 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35a2eb97397c3adad64301d659752afc695be6bf54499fe5c2bb7d88eb22549d 3 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35a2eb97397c3adad64301d659752afc695be6bf54499fe5c2bb7d88eb22549d 3 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35a2eb97397c3adad64301d659752afc695be6bf54499fe5c2bb7d88eb22549d 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xbm 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xbm 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Xbm 00:24:04.058 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3112767 00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3112767 ']' 00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.059 11:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.N4n 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Tgi ]] 00:24:04.316 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Tgi 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.48C 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Et2 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Et2 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.JDI 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.KLK ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KLK 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.XF7 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.S61 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.S61 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Xbm 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:04.317 11:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:05.684 Waiting for block devices as requested 00:24:05.684 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:24:05.684 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:05.684 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:05.684 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:05.941 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:05.941 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:05.941 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:05.941 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:06.198 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:06.198 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:06.198 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:06.198 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:06.455 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:06.455 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:06.455 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:06.455 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:06.711 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:06.969 No valid GPT data, bailing 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:06.969 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:07.226 11:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:24:07.226 00:24:07.226 Discovery Log Number of Records 2, Generation counter 2 00:24:07.226 =====Discovery Log Entry 0====== 00:24:07.226 trtype: tcp 00:24:07.226 adrfam: ipv4 00:24:07.226 subtype: current discovery subsystem 00:24:07.226 treq: not specified, sq flow control disable supported 00:24:07.226 portid: 1 00:24:07.226 trsvcid: 4420 00:24:07.226 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:07.226 traddr: 10.0.0.1 00:24:07.226 eflags: none 00:24:07.226 sectype: none 00:24:07.226 =====Discovery Log Entry 1====== 00:24:07.226 trtype: tcp 00:24:07.226 adrfam: ipv4 00:24:07.226 subtype: nvme subsystem 00:24:07.226 treq: not specified, sq flow control disable supported 00:24:07.226 portid: 1 00:24:07.226 trsvcid: 4420 00:24:07.226 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:07.226 traddr: 10.0.0.1 00:24:07.226 eflags: none 00:24:07.226 sectype: none 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.226 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 
]] 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.227 nvme0n1 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.227 
11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.227 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:07.484 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.485 
11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.485 nvme0n1 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.485 11:50:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.485 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.743 nvme0n1 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.743 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.000 nvme0n1 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:08.000 11:50:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.000 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.258 nvme0n1 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.258 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.259 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.515 nvme0n1 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:08.515 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.516 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.773 nvme0n1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.773 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.030 nvme0n1 00:24:09.030 
11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.030 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.031 11:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.288 nvme0n1 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
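The nvmet_auth_set_key calls traced above run on the target side: the helper takes a digest, a DH group and a key index, looks up the DHHC-1 secrets for that index, and echoes the hash name ('hmac(sha256)'), the DH group, the host key and, when present, the controller key into the target's per-host authentication attributes. A minimal sketch of such a helper follows; the configfs paths and the KEYS/CKEYS tables are illustrative assumptions, not the exact layout auth.sh uses.

#!/usr/bin/env bash
# Sketch of a target-side DH-HMAC-CHAP key setter, modelled on the
# nvmet_auth_set_key trace above. The configfs paths and the KEYS/CKEYS
# tables are assumptions made for illustration.
HOSTNQN=nqn.2024-02.io.spdk:host0
HOST_CFS=/sys/kernel/config/nvmet/hosts/${HOSTNQN}    # assumed attribute location

declare -A KEYS CKEYS                                  # hypothetical key tables
KEYS[2]='DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb:'
CKEYS[2]='DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK:'

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${KEYS[$keyid]} ckey=${CKEYS[$keyid]}

    echo "hmac(${digest})" > "${HOST_CFS}/dhchap_hash"     # e.g. hmac(sha256)
    echo "${dhgroup}"      > "${HOST_CFS}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${HOST_CFS}/dhchap_key"      # host secret, DHHC-1 format
    [[ -n ${ckey} ]] && echo "${ckey}" > "${HOST_CFS}/dhchap_ctrl_key"  # controller secret, optional
}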
00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.288 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.545 nvme0n1 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.545 
11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.545 11:50:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.545 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.802 nvme0n1 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:09.802 11:50:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.802 11:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.059 nvme0n1 00:24:10.059 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.316 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.317 11:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.317 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.317 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.574 nvme0n1 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.574 11:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.574 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.831 nvme0n1 00:24:10.831 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.831 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.831 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.831 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.831 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.831 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
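On the initiator side, every connect_authenticate block in this trace runs the same cycle: restrict the host to the digest and DH group under test with bdev_nvme_set_options, resolve the initiator address (10.0.0.1 here, since the transport is tcp), attach with the host key and, when one exists, the controller key, confirm that the controller appears, and detach again. A minimal sketch of that cycle, assuming rpc_cmd forwards to SPDK's scripts/rpc.py and that key${keyid}/ckey${keyid} refer to secrets registered earlier in the test run.

# Sketch of one connect/verify/detach cycle from the trace above. rpc_cmd is
# assumed to wrap SPDK's scripts/rpc.py; key2/ckey2 etc. are assumed to be
# key names registered earlier in the test.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Allow only the digest/DH-group pair under test on the host.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"

    # Attach over TCP to the target listener. The real test drops
    # --dhchap-ctrlr-key when the key index has no controller secret
    # (see the expansion sketch at the end of this section).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Authentication succeeded only if the controller is visible; then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}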
00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.088 11:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.346 nvme0n1 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.346 11:50:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.346 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.604 nvme0n1 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:11.604 11:50:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.604 11:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.171 nvme0n1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.171 
11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.171 11:50:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.171 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.736 nvme0n1 00:24:12.736 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.736 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.736 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.736 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.736 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.736 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.994 11:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.560 nvme0n1 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.560 
11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.560 11:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.561 11:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.561 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.561 11:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.127 nvme0n1 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.127 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.693 nvme0n1 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.693 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.952 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.884 nvme0n1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.884 11:50:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.884 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.817 nvme0n1 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.817 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.751 nvme0n1 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.751 
11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.751 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
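Each iteration recorded above follows the same host-side RPC sequence, and it can be replayed by hand against a target that already holds the matching DH-HMAC-CHAP secrets. Below is a minimal sketch for the sha256/ffdhe8192/key3 case traced here, assuming SPDK's scripts/rpc.py client stands in for the test's rpc_cmd wrapper and that the key names key3/ckey3 were registered earlier in the script (not shown in this excerpt):

  # host side: allow only the digest/DH group pair under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # attach with key 3 and its controller key, then confirm the controller appeared
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The --dhchap-ctrlr-key argument is only passed when a controller key exists for that key id; key id 4 in this run has an empty ckey, so its attach omits the flag, as the next iteration in the trace shows.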
00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.752 11:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.685 nvme0n1 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.685 
11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.685 11:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.619 nvme0n1 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.619 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.877 nvme0n1 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
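At roughly 00:24:19 the outer loop advances from sha256 to sha384 and the DH-group sweep restarts at ffdhe2048; the xtrace markers auth.sh@100-104 show the shape of that sweep. The control flow can be reconstructed from the trace as in the sketch below; the array contents beyond the digests and ffdhe* groups visible in this excerpt, and the bodies of nvmet_auth_set_key/connect_authenticate, are assumptions:

  # reconstructed sweep; keys/ckeys hold the five DHHC-1 secrets set up earlier in the script
  digests=(sha256 sha384)                        # only these two digests appear in this excerpt
  dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do              # auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
          for keyid in "${!keys[@]}"; do         # auth.sh@102, key ids 0-4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target-side secrets
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach
          done
      done
  done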
00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.877 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.878 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.878 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.136 nvme0n1 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.136 11:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.136 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.395 nvme0n1 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.395 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 nvme0n1 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.654 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.655 nvme0n1 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.655 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.913 nvme0n1 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.913 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.172 11:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 nvme0n1 00:24:21.172 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.172 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.172 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.172 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.431 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.688 nvme0n1 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.688 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.945 nvme0n1 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.945 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.203 nvme0n1 00:24:22.203 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.203 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.203 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.203 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.203 11:50:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.203 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.203 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.461 nvme0n1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.462 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.028 nvme0n1 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.028 11:50:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.028 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.029 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.029 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.029 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.029 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.029 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.029 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.286 nvme0n1 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:23.287 11:50:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.287 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.886 nvme0n1 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:23.886 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.145 nvme0n1 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.145 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.146 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.711 nvme0n1 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.711 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.712 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.278 nvme0n1 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.278 11:50:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.278 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.843 nvme0n1 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.843 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.844 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.409 nvme0n1 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.409 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.974 nvme0n1 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
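Each repeated block in this part of the log is one cell of the host/auth.sh test matrix (digest x dhgroup x keyid): nvmet_auth_set_key programs the DHHC-1 secret, hash and DH group into the kernel nvmet target for the host NQN (the echo 'hmac(sha384)' / echo ffdhe8192 / echo DHHC-1:... lines at auth.sh@48-51), then connect_authenticate limits the SPDK initiator to that digest and DH group with bdev_nvme_set_options, attaches with bdev_nvme_attach_controller using --dhchap-key/--dhchap-ctrlr-key, checks that bdev_nvme_get_controllers reports nvme0, and detaches. The lines below are a minimal by-hand sketch of the iteration that follows (sha384 / ffdhe8192 / keyid 0), not test output: rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, $nvmet_host and the dhchap_* configfs attribute names are assumptions (the redirection targets are not visible in the xtrace), and key0/ckey0 are names of keys registered with the SPDK keyring earlier in the run (not shown in this excerpt).

  # Target side (kernel nvmet): set the secret the initiator must authenticate with.
  # $nvmet_host is assumed to be the configfs host directory for nqn.2024-02.io.spdk:host0.
  echo 'hmac(sha384)' > "$nvmet_host/dhchap_hash"
  echo 'ffdhe8192'    > "$nvmet_host/dhchap_dhgroup"
  echo "$key"         > "$nvmet_host/dhchap_key"       # the DHHC-1:00:... value shown at auth.sh@45
  echo "$ckey"        > "$nvmet_host/dhchap_ctrl_key"  # the controller key shown at auth.sh@46 (bidirectional auth)

  # Initiator side (SPDK): allow only this digest/DH group, attach with DH-HMAC-CHAP, verify, detach.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

When a key has no controller counterpart (ckey is empty, as for keyid 4 earlier in the trace), the [[ -z $ckey ]] guard skips dhchap_ctrl_key and the attach is issued without --dhchap-ctrlr-key, so that iteration exercises unidirectional authentication only.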
00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.974 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.232 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.166 nvme0n1 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.166 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.167 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.167 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.167 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.167 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.167 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.167 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.101 nvme0n1 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.101 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.035 nvme0n1 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.035 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.967 nvme0n1 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.967 11:50:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.967 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.901 nvme0n1 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.901 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.159 nvme0n1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.159 11:50:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.159 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.415 nvme0n1 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.415 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.416 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.672 nvme0n1 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.672 11:50:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.672 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.673 11:50:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.673 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.930 nvme0n1 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.930 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.188 nvme0n1 00:24:33.188 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.188 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.188 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.188 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.188 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.188 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.188 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.446 nvme0n1 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.446 
11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:33.446 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.447 11:50:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.447 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.704 nvme0n1 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
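For readers following the trace: the host/auth.sh@42-@51 lines repeated above are the target-side nvmet_auth_set_key helper. It takes a digest, an FFDHE group and a key index, then echoes 'hmac(<digest>)', the group name, the DHHC-1 host secret and, when one exists, the DHHC-1 controller secret into the target's DH-HMAC-CHAP attributes. xtrace does not show the redirection targets, so the sketch below uses a hypothetical $TARGET_AUTH_DIR and attribute file names purely as placeholders.

    # Sketch reconstructed from the @42-@51 trace lines above.
    # $TARGET_AUTH_DIR and the attribute file names are assumptions; the real
    # echo destinations are elided by xtrace in this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)" > "$TARGET_AUTH_DIR/dhchap_hash"      # e.g. 'hmac(sha512)'
        echo "$dhgroup"      > "$TARGET_AUTH_DIR/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "$key"          > "$TARGET_AUTH_DIR/dhchap_key"       # DHHC-1:... host secret
        [[ -z $ckey ]] || echo "$ckey" > "$TARGET_AUTH_DIR/dhchap_ctrl_key"  # only for bidirectional auth
    }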
00:24:33.704 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.705 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.962 nvme0n1 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.962 11:50:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.962 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
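The @55-@65 lines interleaved with the target-side setup are the host-side connect_authenticate step: it restricts the SPDK initiator to the digest and DH group under test via bdev_nvme_set_options, resolves the initiator address (10.0.0.1 here), attaches the controller with the matching --dhchap-key and, when a controller secret is configured, --dhchap-ctrlr-key, checks that a controller actually came up, and detaches it again. A condensed sketch of that sequence, using the rpc_cmd wrapper seen in the trace:

    # Condensed from the @55-@65 trace lines; rpc_cmd wraps SPDK's rpc.py.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # The controller only shows up in the list if DH-HMAC-CHAP succeeded.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }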
00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.963 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.220 nvme0n1 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.220 
11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.220 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.478 nvme0n1 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.478 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.736 nvme0n1 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.736 11:50:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.736 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.994 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.252 nvme0n1 00:24:35.252 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.252 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.252 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.252 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.252 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
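Stepping back, the @100-@104 lines that open each block above show the driving loop: every digest is paired with every DH group and every key index, and each combination runs the target-side nvmet_auth_set_key followed by the host-side connect_authenticate. Only sha384/sha512 and ffdhe2048 through ffdhe8192 are visible in this excerpt, so the array contents in the sketch below are assumptions.

    # Loop structure implied by the @100-@104 trace lines; the digests/dhgroups
    # array contents are assumptions based on the combinations visible here.
    for digest in "${digests[@]}"; do            # sha384, sha512, ...
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do       # key indexes 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done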
00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:35.252 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.253 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.510 nvme0n1 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.510 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.511 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.769 nvme0n1 00:24:35.769 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.769 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.769 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.769 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.769 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.769 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.026 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.026 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.026 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.026 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.027 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.285 nvme0n1 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
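
The recurring ip_candidates / NVMF_INITIATOR_IP entries are the xtrace of get_main_ns_ip resolving which address the initiator dials for the current transport (tcp here, hence 10.0.0.1). Reassembled from the trace, the helper is roughly the following; the variable names come from the log, while the exact failure handling is an assumption.

# Rough reconstruction of get_main_ns_ip from the xtrace above (error paths assumed).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP        # TCP runs dial the initiator-side IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}          # name of the env var holding the address
    [[ -z ${!ip} ]] && return 1                   # indirect expansion, e.g. 10.0.0.1
    echo "${!ip}"
}
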
00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.285 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.850 nvme0n1 00:24:36.850 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
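
Each iteration in this block is the same host-side round trip: restrict the allowed digest and DH group with bdev_nvme_set_options, attach with the keyid under test, confirm the controller name via bdev_nvme_get_controllers piped through jq, then detach. Condensed into one iteration using the same RPCs and flags that appear in the trace (rpc_cmd stands in for scripts/rpc.py against the running target; the key names key1/ckey1 are assumed to have been registered earlier in the run, outside this excerpt):

# One connect_authenticate-style iteration, condensed from the trace (sha512 / ffdhe6144 / keyid 1).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # expect "nvme0" on successful auth
[[ $ctrlr == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
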
00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.851 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.417 nvme0n1 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.417 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.984 nvme0n1 00:24:37.984 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.984 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.984 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.984 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.984 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.984 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.242 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.242 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.242 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.242 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.242 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.808 nvme0n1 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.808 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.809 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.374 nvme0n1 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.374 11:50:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxNDAyZWEwOTg3NmUzOTZkMzNkZjM0NWZlNGQ4M2IIdk86: 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTBmN2U3NThlYjVmMzNiNGVlYThjOTM5N2FhNjEzNDIxNzMxNGQ1NTVjOGQxMjhkMTViZjZmNjFkYzdiMTZhMqYRZvA=: 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.374 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.307 nvme0n1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.307 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.241 nvme0n1 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.241 11:50:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRmN2M1NDY1NTdjZTZkODEyMjQyNzNjZWMxYzljMGUmnefb: 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTkyNmU2MDg2YWZkNmViYWYyMGExMTkxMDYwZDkzZTWSPpxK: 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.241 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.176 nvme0n1 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJhOGIwZDliZjJhMWMwOTM2MjcyM2U4Mjg4M2Y3NmIzODI1MTE1ZGNiNGI4NGIyrDjhjw==: 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZDcyZGNjY2Y0NmI3M2FlM2ZmYTNjNmViMjczMmKGKgQ2: 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:42.176 11:50:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.176 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.144 nvme0n1 00:24:43.144 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.144 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.144 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.144 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.144 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.144 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzVhMmViOTczOTdjM2FkYWQ2NDMwMWQ2NTk3NTJhZmM2OTViZTZiZjU0NDk5ZmU1YzJiYjdkODhlYjIyNTQ5ZJjFn7U=: 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:43.402 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.338 nvme0n1 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWM3NzI0NjQ4NzJmYmI5NDMyMThhODZjZWY5MTJkNWU2YWQ4MGZjZGJlOThkMjQxaX01/Q==: 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: ]] 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTIzYTU0NDUxYTI4MzRlZmYxZWM0NzY5NGE3YzZmODg3MTBiYWFmNjhkYzM0MDI3lNNHWw==: 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.338 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.339 
11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.339 request: 00:24:44.339 { 00:24:44.339 "name": "nvme0", 00:24:44.339 "trtype": "tcp", 00:24:44.339 "traddr": "10.0.0.1", 00:24:44.339 "adrfam": "ipv4", 00:24:44.339 "trsvcid": "4420", 00:24:44.339 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:44.339 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:44.339 "prchk_reftag": false, 00:24:44.339 "prchk_guard": false, 00:24:44.339 "hdgst": false, 00:24:44.339 "ddgst": false, 00:24:44.339 "method": "bdev_nvme_attach_controller", 00:24:44.339 "req_id": 1 00:24:44.339 } 00:24:44.339 Got JSON-RPC error response 00:24:44.339 response: 00:24:44.339 { 00:24:44.339 "code": -5, 00:24:44.339 "message": "Input/output error" 00:24:44.339 } 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.339 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.598 request: 00:24:44.598 { 00:24:44.598 "name": "nvme0", 00:24:44.598 "trtype": "tcp", 00:24:44.598 "traddr": "10.0.0.1", 00:24:44.598 "adrfam": "ipv4", 00:24:44.598 "trsvcid": "4420", 00:24:44.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:44.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:44.598 "prchk_reftag": false, 00:24:44.598 "prchk_guard": false, 00:24:44.598 "hdgst": false, 00:24:44.598 "ddgst": false, 00:24:44.598 "dhchap_key": "key2", 00:24:44.598 "method": "bdev_nvme_attach_controller", 00:24:44.599 "req_id": 1 00:24:44.599 } 00:24:44.599 Got JSON-RPC error response 00:24:44.599 response: 00:24:44.599 { 00:24:44.599 "code": -5, 00:24:44.599 "message": "Input/output error" 00:24:44.599 } 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:44.599 11:50:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.599 request: 00:24:44.599 { 00:24:44.599 "name": "nvme0", 00:24:44.599 "trtype": "tcp", 00:24:44.599 "traddr": "10.0.0.1", 00:24:44.599 "adrfam": "ipv4", 
00:24:44.599 "trsvcid": "4420", 00:24:44.599 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:44.599 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:44.599 "prchk_reftag": false, 00:24:44.599 "prchk_guard": false, 00:24:44.599 "hdgst": false, 00:24:44.599 "ddgst": false, 00:24:44.599 "dhchap_key": "key1", 00:24:44.599 "dhchap_ctrlr_key": "ckey2", 00:24:44.599 "method": "bdev_nvme_attach_controller", 00:24:44.599 "req_id": 1 00:24:44.599 } 00:24:44.599 Got JSON-RPC error response 00:24:44.599 response: 00:24:44.599 { 00:24:44.599 "code": -5, 00:24:44.599 "message": "Input/output error" 00:24:44.599 } 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.599 rmmod nvme_tcp 00:24:44.599 rmmod nvme_fabrics 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3112767 ']' 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3112767 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3112767 ']' 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3112767 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3112767 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3112767' 00:24:44.599 killing process with pid 3112767 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3112767 00:24:44.599 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3112767 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.858 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.813 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:46.813 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:46.813 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:46.813 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:46.813 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:46.813 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:47.072 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:47.072 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:47.072 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:47.072 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:47.072 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:47.073 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:47.073 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:48.446 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:48.446 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:48.446 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:49.382 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:24:49.382 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.N4n /tmp/spdk.key-null.48C /tmp/spdk.key-sha256.JDI /tmp/spdk.key-sha384.XF7 /tmp/spdk.key-sha512.Xbm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:49.382 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:50.756 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:50.756 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:50.756 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:50.756 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:50.756 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:50.756 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:50.756 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:50.756 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:50.756 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:50.756 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:50.757 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:50.757 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:50.757 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:50.757 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:50.757 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:50.757 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:50.757 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:50.757 00:24:50.757 real 0m49.878s 00:24:50.757 user 0m47.200s 00:24:50.757 sys 0m5.940s 00:24:50.757 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.757 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.757 ************************************ 00:24:50.757 END TEST nvmf_auth_host 00:24:50.757 ************************************ 00:24:50.757 11:50:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.757 11:50:58 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:50.757 11:50:58 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:50.757 11:50:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.757 11:50:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.757 11:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.757 ************************************ 00:24:50.757 START TEST nvmf_digest 00:24:50.757 ************************************ 00:24:50.757 11:50:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:51.014 * Looking for test storage... 
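For reference, the clean_kernel_target steps traced just above take the kernel-mode nvmet target apart through configfs, roughly in reverse order of its creation, and the final rm -f drops the DH-HMAC-CHAP key files the auth test generated. A condensed standalone sketch of that cleanup, using the NQNs, port index and key paths from the log; the destination of the logged 'echo 0' is assumed to be the subsystem's enable attribute, not read from the log:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/enable"                                 # assumed target of the 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet
    rm -f /tmp/spdk.key-null.N4n /tmp/spdk.key-null.48C /tmp/spdk.key-sha256.JDI \
          /tmp/spdk.key-sha384.XF7 /tmp/spdk.key-sha512.Xbm   # generated DHCHAP key files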
00:24:51.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.014 11:50:58 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.015 11:50:58 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.015 11:50:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:52.910 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:52.910 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.910 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:52.911 Found net devices under 0000:84:00.0: cvl_0_0 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:52.911 Found net devices under 0000:84:00.1: cvl_0_1 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.911 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:53.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:24:53.168 00:24:53.168 --- 10.0.0.2 ping statistics --- 00:24:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.168 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:53.168 00:24:53.168 --- 10.0.0.1 ping statistics --- 00:24:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.168 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.168 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.169 11:51:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 ************************************ 00:24:53.169 START TEST nvmf_digest_clean 00:24:53.169 ************************************ 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3122319 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3122319 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3122319 ']' 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.169 
11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.169 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 [2024-07-15 11:51:01.070677] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:53.169 [2024-07-15 11:51:01.070793] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.169 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.169 [2024-07-15 11:51:01.137948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.426 [2024-07-15 11:51:01.251169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.426 [2024-07-15 11:51:01.251223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.426 [2024-07-15 11:51:01.251237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.426 [2024-07-15 11:51:01.251248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.426 [2024-07-15 11:51:01.251257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
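The nvmf_tcp_init sequence traced earlier builds the digest test's network out of the two physical E810 ports (cvl_0_0/cvl_0_1): the target-side port is moved into its own network namespace, each side gets an address on 10.0.0.0/24, TCP port 4420 is opened on the initiator interface, and both directions are ping-checked. Collected into one place, the commands from the log are:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator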
00:24:53.426 [2024-07-15 11:51:01.251284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.426 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:53.426 null0 00:24:53.426 [2024-07-15 11:51:01.413480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.684 [2024-07-15 11:51:01.437664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3122372 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3122372 /var/tmp/bperf.sock 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3122372 ']' 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
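nvmfappstart then launches the target idle inside that namespace (--wait-for-rpc) and waitforlisten polls its RPC socket before anything is configured; common_target_config sends the configuration as a single rpc_cmd batch, so only its effects are visible in the log (the null0 bdev, the TCP transport init, the listener on 10.0.0.2:4420). A rough equivalent issued call by call is sketched below; the RPC names are standard SPDK RPCs, but the null bdev size/block size and the -a (allow any host) flag are assumptions, not read from the log:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # waitforlisten() polls the RPC socket; spdk_get_version is one way to do the same
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
        sleep 0.5
    done
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create null0 100 4096    # size assumed
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420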
00:24:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.684 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:53.684 [2024-07-15 11:51:01.482971] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:53.684 [2024-07-15 11:51:01.483077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122372 ] 00:24:53.684 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.684 [2024-07-15 11:51:01.543491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.684 [2024-07-15 11:51:01.659211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.941 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.941 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:53.942 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:53.942 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:53.942 11:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:54.200 11:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.200 11:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.456 nvme0n1 00:24:54.456 11:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:54.456 11:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:54.714 Running I/O for 2 seconds... 
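The initiator side of run_bperf, traced above, is driven entirely over bdevperf's own RPC socket: bdevperf is started idle, the NVMe/TCP controller is attached with --ddgst so every I/O carries a CRC32C data digest, and the workload is kicked off with bdevperf.py. Condensed from the log, with paths shortened to the repo root and the waitforlisten polling omitted:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the harness waits for /var/tmp/bperf.sock to start answering before issuing RPCs)
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests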
00:24:56.643 00:24:56.643 Latency(us) 00:24:56.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:56.643 nvme0n1 : 2.01 20342.24 79.46 0.00 0.00 6285.40 3070.48 14951.92 00:24:56.643 =================================================================================================================== 00:24:56.643 Total : 20342.24 79.46 0.00 0.00 6285.40 3070.48 14951.92 00:24:56.643 0 00:24:56.643 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:56.643 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:56.643 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:56.643 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:56.643 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:56.643 | select(.opcode=="crc32c") 00:24:56.643 | "\(.module_name) \(.executed)"' 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3122372 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3122372 ']' 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3122372 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3122372 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3122372' 00:24:56.900 killing process with pid 3122372 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3122372 00:24:56.900 Received shutdown signal, test time was about 2.000000 seconds 00:24:56.900 00:24:56.900 Latency(us) 00:24:56.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.900 =================================================================================================================== 00:24:56.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.900 11:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3122372 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:57.158 11:51:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3122803 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3122803 /var/tmp/bperf.sock 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3122803 ']' 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.158 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:57.158 [2024-07-15 11:51:05.067952] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:57.158 [2024-07-15 11:51:05.068050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122803 ] 00:24:57.158 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:57.158 Zero copy mechanism will not be used. 
00:24:57.158 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.158 [2024-07-15 11:51:05.133037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.416 [2024-07-15 11:51:05.245496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.416 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.416 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:57.416 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:57.416 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:57.416 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:57.674 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:57.674 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.239 nvme0n1 00:24:58.239 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:58.239 11:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:58.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.239 Zero copy mechanism will not be used. 00:24:58.239 Running I/O for 2 seconds... 
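After each timed pass the script validates where the digest work actually ran: it pulls accel statistics over the bperf RPC socket and requires that the crc32c opcode was executed at least once by the expected module, which is "software" in all four passes here since DSA scanning is disabled (scan_dsa=false). The check reduces to the jq filter shown in the trace:

    # Which accel module executed crc32c, and how many times?
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh then asserts: executed > 0 and module_name == "software".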
00:25:00.138 00:25:00.138 Latency(us) 00:25:00.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.138 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:00.138 nvme0n1 : 2.00 4845.47 605.68 0.00 0.00 3298.40 782.79 11553.75 00:25:00.138 =================================================================================================================== 00:25:00.138 Total : 4845.47 605.68 0.00 0.00 3298.40 782.79 11553.75 00:25:00.138 0 00:25:00.138 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:00.138 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:00.138 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:00.138 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:00.138 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:00.138 | select(.opcode=="crc32c") 00:25:00.138 | "\(.module_name) \(.executed)"' 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3122803 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3122803 ']' 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3122803 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3122803 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3122803' 00:25:00.396 killing process with pid 3122803 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3122803 00:25:00.396 Received shutdown signal, test time was about 2.000000 seconds 00:25:00.396 00:25:00.396 Latency(us) 00:25:00.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.396 =================================================================================================================== 00:25:00.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.396 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3122803 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:00.654 11:51:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3123642 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3123642 /var/tmp/bperf.sock 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3123642 ']' 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:00.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.654 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:00.913 [2024-07-15 11:51:08.679329] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:25:00.913 [2024-07-15 11:51:08.679415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3123642 ] 00:25:00.913 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.913 [2024-07-15 11:51:08.740108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.913 [2024-07-15 11:51:08.850762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.913 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.913 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:00.913 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:00.913 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:00.913 11:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:01.479 11:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.479 11:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.737 nvme0n1 00:25:01.737 11:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:01.737 11:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.737 Running I/O for 2 seconds... 
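The MiB/s column in each job table is simply IOPS multiplied by the I/O size. A quick cross-check of the two randread tables above (purely a sanity check on the reported numbers, not part of the test):

    awk 'BEGIN { printf "%.2f MiB/s\n", 20342.24 * 4096   / 1048576 }'   # 4 KiB,   qd 128 ->  79.46
    awk 'BEGIN { printf "%.2f MiB/s\n",  4845.47 * 131072 / 1048576 }'   # 128 KiB, qd 16  -> 605.68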
00:25:04.262 00:25:04.262 Latency(us) 00:25:04.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:04.263 nvme0n1 : 2.00 23113.36 90.29 0.00 0.00 5529.44 2402.99 13107.20 00:25:04.263 =================================================================================================================== 00:25:04.263 Total : 23113.36 90.29 0.00 0.00 5529.44 2402.99 13107.20 00:25:04.263 0 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:04.263 | select(.opcode=="crc32c") 00:25:04.263 | "\(.module_name) \(.executed)"' 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3123642 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3123642 ']' 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3123642 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:04.263 11:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3123642 00:25:04.263 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:04.263 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:04.263 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3123642' 00:25:04.263 killing process with pid 3123642 00:25:04.263 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3123642 00:25:04.263 Received shutdown signal, test time was about 2.000000 seconds 00:25:04.263 00:25:04.263 Latency(us) 00:25:04.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.263 =================================================================================================================== 00:25:04.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.263 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3123642 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:04.521 11:51:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3124238 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3124238 /var/tmp/bperf.sock 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3124238 ']' 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.521 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:04.521 [2024-07-15 11:51:12.317060] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:04.521 [2024-07-15 11:51:12.317148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124238 ] 00:25:04.521 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:04.521 Zero copy mechanism will not be used. 
00:25:04.521 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.521 [2024-07-15 11:51:12.381949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.521 [2024-07-15 11:51:12.494911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.778 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.778 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:04.778 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:04.778 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:04.778 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:05.036 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.036 11:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.293 nvme0n1 00:25:05.293 11:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:05.293 11:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.293 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.293 Zero copy mechanism will not be used. 00:25:05.293 Running I/O for 2 seconds... 
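The repeated kill/uname/ps lines after every pass are the harness's killprocess helper shutting the bperf instance down before the next pass starts. A simplified reconstruction from the xtrace output (the real helper in autotest_common.sh carries extra platform checks):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                                   # must still be running
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return  # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                                 # reap it (bperf is a child of this shell)
    }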
00:25:07.819 00:25:07.819 Latency(us) 00:25:07.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.819 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:07.819 nvme0n1 : 2.00 4968.13 621.02 0.00 0.00 3213.13 2500.08 6140.97 00:25:07.819 =================================================================================================================== 00:25:07.819 Total : 4968.13 621.02 0.00 0.00 3213.13 2500.08 6140.97 00:25:07.819 0 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:07.819 | select(.opcode=="crc32c") 00:25:07.819 | "\(.module_name) \(.executed)"' 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:07.819 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3124238 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3124238 ']' 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3124238 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124238 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124238' 00:25:07.820 killing process with pid 3124238 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3124238 00:25:07.820 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.820 00:25:07.820 Latency(us) 00:25:07.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.820 =================================================================================================================== 00:25:07.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.820 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3124238 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3122319 00:25:08.077 11:51:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3122319 ']' 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3122319 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3122319 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3122319' 00:25:08.077 killing process with pid 3122319 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3122319 00:25:08.077 11:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3122319 00:25:08.336 00:25:08.336 real 0m15.108s 00:25:08.336 user 0m29.240s 00:25:08.336 sys 0m4.948s 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.336 ************************************ 00:25:08.336 END TEST nvmf_digest_clean 00:25:08.336 ************************************ 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:08.336 ************************************ 00:25:08.336 START TEST nvmf_digest_error 00:25:08.336 ************************************ 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3124677 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3124677 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3124677 ']' 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.336 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.336 [2024-07-15 11:51:16.230840] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:08.336 [2024-07-15 11:51:16.230922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.336 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.336 [2024-07-15 11:51:16.297911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.594 [2024-07-15 11:51:16.409898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.594 [2024-07-15 11:51:16.409953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.594 [2024-07-15 11:51:16.409967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.594 [2024-07-15 11:51:16.409978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.594 [2024-07-15 11:51:16.409987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
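From here on this is the nvmf_digest_error test: the target is restarted with --wait-for-rpc precisely so that crc32c can be reassigned to the error-injecting accel module before initialization, a null0 namespace is exposed over TCP at 10.0.0.2:4420, and bdevperf is told to retry failed I/O indefinitely. The error-injection RPCs that appear in the following trace, in sketch form (rpc_cmd is the harness helper that talks to the nvmf target's RPC socket; the initiator-side call goes to the bperf socket):

    # Target side: route crc32c through the "error" accel module, then arm it
    # to corrupt 256 crc32c operations once the controller is attached.
    rpc_cmd accel_assign_opc -o crc32c -m error
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # Initiator (bperf) side: record NVMe error stats and retry failed I/O forever.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

The corrupted digests are what produce the stream of "data digest error on tqpair" messages from nvme_tcp.c and the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that fill the rest of the log.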
00:25:08.594 [2024-07-15 11:51:16.410027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.594 [2024-07-15 11:51:16.470523] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.594 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.594 null0 00:25:08.595 [2024-07-15 11:51:16.573879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.852 [2024-07-15 11:51:16.598111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3124822 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3124822 /var/tmp/bperf.sock 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3124822 ']' 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.852 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.852 [2024-07-15 11:51:16.642175] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:08.852 [2024-07-15 11:51:16.642253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124822 ] 00:25:08.852 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.852 [2024-07-15 11:51:16.700864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.852 [2024-07-15 11:51:16.805973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.111 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.111 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:09.111 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:09.111 11:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:09.369 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:09.369 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.369 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.369 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.369 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.369 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.626 nvme0n1 00:25:09.626 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:09.626 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.626 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.626 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.626 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:09.626 11:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.885 Running I/O for 2 seconds... 00:25:09.885 [2024-07-15 11:51:17.727716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.727783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.727803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.738484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.738512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.738544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.752044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.752073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.752103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.764147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.764177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.764207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.777094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.777121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.777136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.787712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.787762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.787779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.801088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.801125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22883 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.801157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.810659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.810685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.810716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.822625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.822652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.822682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.834822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.834850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.834880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.849665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.849692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.849722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.885 [2024-07-15 11:51:17.863901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:09.885 [2024-07-15 11:51:17.863929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.885 [2024-07-15 11:51:17.863960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.878856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.878884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.878915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.889160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.889187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 
nsid:1 lba:22431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.889217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.904489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.904516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.904546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.917699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.917727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.917770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.932296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.932324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.932354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.948087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.948115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.948147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.962933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.962960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.144 [2024-07-15 11:51:17.962990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.144 [2024-07-15 11:51:17.972997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.144 [2024-07-15 11:51:17.973024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:17.973040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:17.987574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:17.987600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:17.987630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.003002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.003029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.003059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.017995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.018037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.018053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.032923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.032951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.032988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.048383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.048410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.048441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.063184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.063211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.063241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.076956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.076984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.077016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.087489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 
00:25:10.145 [2024-07-15 11:51:18.087516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.087546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.098507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.098534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.098563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.113204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.113231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.113261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.145 [2024-07-15 11:51:18.128272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.145 [2024-07-15 11:51:18.128299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.145 [2024-07-15 11:51:18.128330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.141116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.141143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.141173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.151150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.151182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.151214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.163893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.163924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.176066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.176109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.176124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.191826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.191855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.191886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.205597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.205625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.205655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.218408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.218436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.218467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.230001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.230030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.230062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.245318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.245346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.245377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.255610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.255638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.255668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.269182] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.269210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.269241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.281750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.281777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.281809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.291843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.291872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.291905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.305635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.305663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.305693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.316920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.316948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.316979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.330625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.330653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.330683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.344518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.344547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.344578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:10.405 [2024-07-15 11:51:18.355703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.355753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.355771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.370516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.370544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.370580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.405 [2024-07-15 11:51:18.385522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.405 [2024-07-15 11:51:18.385550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.405 [2024-07-15 11:51:18.385581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.687 [2024-07-15 11:51:18.396027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.687 [2024-07-15 11:51:18.396060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.687 [2024-07-15 11:51:18.396077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.687 [2024-07-15 11:51:18.411300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.687 [2024-07-15 11:51:18.411328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.687 [2024-07-15 11:51:18.411359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.687 [2024-07-15 11:51:18.424405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.687 [2024-07-15 11:51:18.424434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.687 [2024-07-15 11:51:18.424465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.687 [2024-07-15 11:51:18.436415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.687 [2024-07-15 11:51:18.436443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.687 [2024-07-15 11:51:18.436474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.687 [2024-07-15 11:51:18.446346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.687 [2024-07-15 11:51:18.446375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.687 [2024-07-15 11:51:18.446407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.687 [2024-07-15 11:51:18.457842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.457871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.457902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.472956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.472985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.473017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.487499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.487534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.487565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.497441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.497469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.497500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.509530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.509558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.509590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.523163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.523190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.523221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.533308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.533336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.533366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.546319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.546346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.546377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.559186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.559214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.559244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.569070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.569097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.569127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.581872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.581901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.581932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.594196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.594223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.594255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.606552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.606579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.606610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.616853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.616882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.616915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.629276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.629305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.629336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.642105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.642134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.642165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.688 [2024-07-15 11:51:18.653224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.688 [2024-07-15 11:51:18.653256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.688 [2024-07-15 11:51:18.653290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.669183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.669212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.669243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.685752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.685809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.685826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.700192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.700221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3616 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.700258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.711335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.711363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.711394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.722490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.722517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.722547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.736311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.736340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.736371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.747513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.747541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.747572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.758258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.758286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.758317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.770395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.770423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.770453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.783008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.783052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:14814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.783068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.795350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.795378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.795408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.805879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.805907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.805940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.817659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.817686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.817717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.831993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.832021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.832052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.842435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.842462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.842493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.855297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.855325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.855356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.866733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.866769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.866802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.878036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.878073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.878105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.892500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.892528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.892559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.903173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.903201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.903238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.915858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.915886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.915918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.925941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.925969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.926001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.957 [2024-07-15 11:51:18.938376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:10.957 [2024-07-15 11:51:18.938405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.957 [2024-07-15 11:51:18.938445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:18.951029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 
[2024-07-15 11:51:18.951072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:18.951088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:18.965316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:18.965344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:18.965374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:18.975434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:18.975462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:18.975493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:18.989104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:18.989131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:18.989162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.000029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.000058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.000074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.014552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.014585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.014617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.030428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.030456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.030487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.040321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.040349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.040379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.054745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.054773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.054804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.070048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.070090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.070106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.084845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.084873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.084905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.098083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.098109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.098139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.108459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.108486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.108516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.123898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.123926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.123957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.138866] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.138893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.138923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.154176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.154203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.154232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.166250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.166276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.166306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.176497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.176523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.176553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.188136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.188163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.188192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.216 [2024-07-15 11:51:19.201604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.216 [2024-07-15 11:51:19.201631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.216 [2024-07-15 11:51:19.201662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.212640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.212667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.212697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:11.474 [2024-07-15 11:51:19.224674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.224700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.224731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.236522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.236549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.236584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.246424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.246450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.246480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.261545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.261572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.261601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.275657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.275683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.275713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.290581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.290608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.290637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.304862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.304890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.304921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.315315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.315343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.315373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.329540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.329567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.329598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.341626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.341653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.341683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.353223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.353255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.353285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.474 [2024-07-15 11:51:19.367169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.474 [2024-07-15 11:51:19.367197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.474 [2024-07-15 11:51:19.367227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.377898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.377925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.377956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.389803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.389860] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.400951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.400977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.401007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.413215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.413241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.413271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.424551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.424578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.424608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.437398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.437426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.437457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.475 [2024-07-15 11:51:19.447081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.475 [2024-07-15 11:51:19.447123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.475 [2024-07-15 11:51:19.447138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.461667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.461694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.461723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.475109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.475136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.475166] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.484287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.484314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.484344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.502464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.502492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.502523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.517118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.517146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.517179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.528810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.528840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.528872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.542401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.542428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.542459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.555158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.555185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.555215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.564963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.564993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:11.732 [2024-07-15 11:51:19.565030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.577989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.578016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.578047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.588464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.588491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.588521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.603041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.603069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.603103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.619230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.619258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.619288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.633675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.633702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.633731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.649178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.649205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.649235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.660258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.660285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6271 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.660315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.670935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.670962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.670992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.683576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.683603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.683633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.697133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.697160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.697190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.732 [2024-07-15 11:51:19.708192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x891280) 00:25:11.732 [2024-07-15 11:51:19.708219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.732 [2024-07-15 11:51:19.708249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.989 00:25:11.989 Latency(us) 00:25:11.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.989 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:11.989 nvme0n1 : 2.05 19524.64 76.27 0.00 0.00 6424.57 3228.25 45244.11 00:25:11.989 =================================================================================================================== 00:25:11.989 Total : 19524.64 76.27 0.00 0.00 6424.57 3228.25 45244.11 00:25:11.989 0 00:25:11.989 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:11.989 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:11.989 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:11.989 | .driver_specific 00:25:11.989 | .nvme_error 00:25:11.989 | .status_code 00:25:11.989 | .command_transient_transport_error' 00:25:11.989 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- 
# (( 156 > 0 )) 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3124822 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3124822 ']' 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3124822 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124822 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:12.245 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:12.246 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124822' 00:25:12.246 killing process with pid 3124822 00:25:12.246 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3124822 00:25:12.246 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.246 00:25:12.246 Latency(us) 00:25:12.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.246 =================================================================================================================== 00:25:12.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.246 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3124822 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3125229 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3125229 /var/tmp/bperf.sock 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3125229 ']' 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
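For reference, the 128 KiB digest-error pass that starts here is easier to follow as a standalone sequence. The sketch below is a minimal reconstruction assembled only from commands visible in this trace (the bdevperf flags, the /var/tmp/bperf.sock RPC socket, the 10.0.0.2:4420 portal and the nqn.2016-06.io.spdk:cnode1 subsystem are taken from this run); it is not the digest.sh helper itself, it assumes the NVMe-oF TCP target is already up, and error handling is omitted. The SPDK and BPERF_SOCK variables are shorthands introduced here. Note that in the trace the accel_error_inject_error calls go through rpc_cmd rather than the bperf socket, i.e. against the default RPC socket of the already-running target application, which is why they are shown below without -s.

# Sketch of the data-digest error-injection flow seen in this log (assumptions noted above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf waiting for RPC configuration (-z): core mask 0x2, 128 KiB random reads, qd 16, 2 s run.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &

# 2. Enable per-controller NVMe error counters and unlimited bdev retries on the bdevperf side.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Keep crc32c error injection disabled while the controller is attached (as the trace does).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# 4. Attach the target with data digest enabled (--ddgst) so digest mismatches are detected on receive.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. Now inject corrupted crc32c results (flags as issued in this run), then drive the workload.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

# 6. Read back how many commands completed with a transient transport error.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each injected digest failure appears in the trace as a pair of lines: the nvme_tcp data digest error on the qpair, followed by the command completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The final jq extraction and non-zero check mirror the (( 156 > 0 )) assertion shown just above for the preceding 4 KiB pass.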
00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:12.503 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.503 [2024-07-15 11:51:20.381496] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:12.503 [2024-07-15 11:51:20.381573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125229 ] 00:25:12.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:12.503 Zero copy mechanism will not be used. 00:25:12.503 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.503 [2024-07-15 11:51:20.440815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.760 [2024-07-15 11:51:20.550551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.760 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.760 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:12.760 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:12.760 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:13.017 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:13.017 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.017 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.017 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.017 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.017 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.580 nvme0n1 00:25:13.580 11:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:13.580 11:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.580 11:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.580 11:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.580 11:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:13.580 11:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.580 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.580 Zero copy mechanism will not be used. 00:25:13.580 Running I/O for 2 seconds... 00:25:13.580 [2024-07-15 11:51:21.524073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.580 [2024-07-15 11:51:21.524140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.580 [2024-07-15 11:51:21.524161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.580 [2024-07-15 11:51:21.532758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.580 [2024-07-15 11:51:21.532789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.580 [2024-07-15 11:51:21.532806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.580 [2024-07-15 11:51:21.541938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.580 [2024-07-15 11:51:21.541972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.580 [2024-07-15 11:51:21.542005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.580 [2024-07-15 11:51:21.551305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.580 [2024-07-15 11:51:21.551332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.580 [2024-07-15 11:51:21.551362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.580 [2024-07-15 11:51:21.560458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.580 [2024-07-15 11:51:21.560485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.580 [2024-07-15 11:51:21.560515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.570031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.570061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.570077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.579239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 
[2024-07-15 11:51:21.579265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.579302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.588365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.588391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.588420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.597486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.597512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.597543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.606601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.606627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.606658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.615796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.615853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.624757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.624798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.624815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.633763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.633789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.633819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.642765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.642793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.642824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.651670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.651696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.651725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.661101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.661127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.661157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.670038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.670063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.670078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.679135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.679161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.679191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.688186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.839 [2024-07-15 11:51:21.688212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.839 [2024-07-15 11:51:21.688242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.839 [2024-07-15 11:51:21.697159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.697185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.697214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.706086] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.706127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.706142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.715218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.715243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.715272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.724316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.724341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.724372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.733754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.733781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.733817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.743028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.743055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.743070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.751500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.751526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.751556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.757688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.757713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.757751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.763680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.763706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.763742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.769505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.769530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.769560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.775199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.775225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.775255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.781078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.781103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.781132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.786896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.786922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.786953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.792574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.792605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.792635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.798387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.798413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.798442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.804232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.804257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.804287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.810353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.810379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.810408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.840 [2024-07-15 11:51:21.817403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:13.840 [2024-07-15 11:51:21.817428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.840 [2024-07-15 11:51:21.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.825864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.825894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.825910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.833552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.833578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.833608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.839627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.839652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.839682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.845497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.845522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.845552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.851164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.851189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.851219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.856547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.856573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.856602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.862305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.862331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.862360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.868149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.868175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.868204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.874457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.874492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.874522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.881533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.881559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.881588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.888504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.888529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:14.099 [2024-07-15 11:51:21.888560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.894829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.894857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.894887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.902058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.902098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.902118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.908139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.908165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.908194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.914181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.914206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.914236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.920294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.920319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.920348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.927371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.927396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.927426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.935330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.099 [2024-07-15 11:51:21.935355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.099 [2024-07-15 11:51:21.935386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.099 [2024-07-15 11:51:21.944133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.944160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.944189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:21.953375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.953402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.953432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:21.962231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.962258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.962289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:21.972446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.972478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.972508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:21.981247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.981274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.981304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:21.990121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.990149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.990179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:21.999358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:21.999385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:21.999414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.008980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.009008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.009023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.018659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.018686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.018717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.027855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.027890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.027921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.037339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.037367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.037397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.046314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.046342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.046383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.054482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.054511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.054541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.064073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.064116] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.064132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.073280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.073308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.073339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.100 [2024-07-15 11:51:22.081697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.100 [2024-07-15 11:51:22.081748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.100 [2024-07-15 11:51:22.081765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.090893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.090922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.090953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.099163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.099191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.099221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.107653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.107681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.107711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.117001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.117043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.117058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.125947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 
00:25:14.359 [2024-07-15 11:51:22.125985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.126018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.133755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.133784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.133816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.141237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.141264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.141295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.148140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.148167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.148198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.155783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.155815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.155832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.163845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.163877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.163894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.171419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.171446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.171477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.179552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.179579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.179615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.187161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.187189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.187219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.194966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.195001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.195031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.202398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.202425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.202457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.209833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.209862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.209893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.217471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.217498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.217538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.225985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.226014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.226030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.231682] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.231708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.231747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.238035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.238073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.238088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.244233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.244259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.244288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.249756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.249783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.249828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.359 [2024-07-15 11:51:22.255265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.359 [2024-07-15 11:51:22.255291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.359 [2024-07-15 11:51:22.255321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.261071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.261097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.261126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.267027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.267063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.267078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:14.360 [2024-07-15 11:51:22.272973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.273000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.273031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.278918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.278945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.278975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.284751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.284777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.284807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.290492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.290519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.290548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.296239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.296265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.296295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.302030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.302086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.302102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.307645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.307671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.307700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.313276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.313309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.313339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.318850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.318876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.318906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.322399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.322427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.322456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.327113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.327139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.327169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.332768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.332794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.332808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.338271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.338296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.338326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.360 [2024-07-15 11:51:22.344975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.360 [2024-07-15 11:51:22.345014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.360 [2024-07-15 11:51:22.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.351084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.351110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.351139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.356780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.356808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.362371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.362397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.362426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.368169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.368194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.368223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.374386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.374413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.374443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.380234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.380260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.380289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.386001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.619 [2024-07-15 11:51:22.386028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.619 [2024-07-15 11:51:22.386043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.619 [2024-07-15 11:51:22.391665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.391692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.391731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.397359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.397387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.397424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.403314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.403340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.403380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.409312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.409338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.409367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.415350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.415375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.415405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.421334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.421359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.421388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.427339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.427365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 
[2024-07-15 11:51:22.427395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.433448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.433483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.433512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.439443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.439469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.439498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.445619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.445645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.445675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.451356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.451387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.451423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.457523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.457549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.457578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.463688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.463714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.463752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.470108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.470135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.470164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.477731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.477765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.477795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.485324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.485351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.485381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.492966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.492994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.493024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.500450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.500478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.500509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.508284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.508316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.508346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.516563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.516589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.516619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.524354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.524381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.524409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.620 [2024-07-15 11:51:22.531276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.620 [2024-07-15 11:51:22.531302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.620 [2024-07-15 11:51:22.531333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.539102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.539129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.539169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.546872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.546900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.546931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.554498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.554525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.554554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.562799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.562827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.562858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.570526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.570560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.570589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.578353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.578386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.578418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.585878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.585912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.585942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.592116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.592142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.592172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.598175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.598202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.598232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.621 [2024-07-15 11:51:22.604345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.621 [2024-07-15 11:51:22.604372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.621 [2024-07-15 11:51:22.604402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.611266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.611298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.611329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.618970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.618997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.619028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.626744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.626772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.626803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.634677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.634703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.634733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.641548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.641576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.641606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.649101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.649142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.649157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.656679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.656706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.656735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.664357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.664394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.664424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.672137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.672173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.672202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.680292] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.680320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.680350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.687868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.687895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.687926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.695760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.695788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.695818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.703523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.703550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.703595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.711285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.711312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.711343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.718950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.718977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.719008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.880 [2024-07-15 11:51:22.726496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.880 [2024-07-15 11:51:22.726523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.880 [2024-07-15 11:51:22.726552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:14.881 [2024-07-15 11:51:22.734279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.734305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.734335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.741849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.741877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.741907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.748840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.748868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.748898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.756206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.756233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.756263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.763030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.763072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.763092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.770355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.770387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.770417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.779163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.779191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.779221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.786644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.786671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.786701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.793808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.793836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.793867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.800203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.800229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.800260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.807679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.807706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.807735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.813571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.813596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.813626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.819305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.819331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.819369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.825140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.825173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.825202] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.830909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.830936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.830970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.836695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.836735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.836769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.842498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.842524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.842566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.848405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.848442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.848471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.854086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.854112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.854141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.859710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.859757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.859773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.881 [2024-07-15 11:51:22.865661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:14.881 [2024-07-15 11:51:22.865686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.881 [2024-07-15 11:51:22.865714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.871663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.142 [2024-07-15 11:51:22.871689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.142 [2024-07-15 11:51:22.871719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.877439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.142 [2024-07-15 11:51:22.877465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.142 [2024-07-15 11:51:22.877508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.883088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.142 [2024-07-15 11:51:22.883114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.142 [2024-07-15 11:51:22.883143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.888804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.142 [2024-07-15 11:51:22.888833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.142 [2024-07-15 11:51:22.888865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.894586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.142 [2024-07-15 11:51:22.894611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.142 [2024-07-15 11:51:22.894641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.900500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.142 [2024-07-15 11:51:22.900525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.142 [2024-07-15 11:51:22.900556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.142 [2024-07-15 11:51:22.906497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.906535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.143 [2024-07-15 11:51:22.906566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.912565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.912592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.912622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.918443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.918470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.918500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.924398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.924423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.924453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.930181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.930231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.930246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.935848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.935876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.935907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.941558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.941584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.941613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.947272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.947298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.947327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.952909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.952936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.952975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.958523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.958549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.964172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.964207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.964237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.970188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.970215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.970244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.975996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.976039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.976067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.982386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.982411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.982441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.988674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.988702] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.988731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:22.994708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:22.994758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:22.994776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.000728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.000762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.000777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.006557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.006583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.006613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.012485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.012541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.019151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.019182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.019212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.025908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.025935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.025966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.032981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.033013] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.033045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.039865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.039894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.039925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.048319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.048347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.048377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.055504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.055530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.055559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.059913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.059942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.059974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.067977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.068006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.068042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.076565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.076592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.076623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.085122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 
00:25:15.143 [2024-07-15 11:51:23.085149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.094117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.094144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.094174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.102840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.102868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.102898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.112949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.112977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.113007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.143 [2024-07-15 11:51:23.122220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.143 [2024-07-15 11:51:23.122248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.143 [2024-07-15 11:51:23.122278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.132735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.132789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.132806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.140919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.140947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.140979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.149888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.149924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.149953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.158564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.158590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.158620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.167856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.167884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.167916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.176207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.176235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.176272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.184155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.184183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.184214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.191213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.191241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.191271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.197375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.197404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.197434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.204478] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.204505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.204535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.211785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.211814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.211845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.219340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.219368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.219399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.227292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.227320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.227350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.234127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.234153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.234184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.241038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.241087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.241102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.401 [2024-07-15 11:51:23.247930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.247958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.401 [2024-07-15 11:51:23.247989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:15.401 [2024-07-15 11:51:23.254803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.401 [2024-07-15 11:51:23.254831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.254862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.261280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.261307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.261337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.268393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.268421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.268452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.275317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.275344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.275374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.281494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.281520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.281551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.288235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.288274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.288304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.295131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.295159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.295189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.302960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.302990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.303023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.309574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.309601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.309631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.316646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.316674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.316703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.323492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.323518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.323548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.330511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.330537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.330567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.337597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.337623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.337653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.345577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.345603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.345633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.353526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.353552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.353583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.361144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.361171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.361208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.368730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.368776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.368806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.375119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.375146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.375175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.379695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.379735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.379759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.402 [2024-07-15 11:51:23.386970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.402 [2024-07-15 11:51:23.387000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.402 [2024-07-15 11:51:23.387016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.397195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.397223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 
[2024-07-15 11:51:23.397256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.405128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.405157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.405188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.411842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.411870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.411901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.419662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.419689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.419719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.427373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.427407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.427438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.435290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.435318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.435348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.442341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.442368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.442398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.449311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.449338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.449369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.457941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.457983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.458000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.465679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.465705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.465735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.473015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.473055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.473069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.480113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.661 [2024-07-15 11:51:23.480138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-15 11:51:23.480167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.661 [2024-07-15 11:51:23.487459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.662 [2024-07-15 11:51:23.487484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.662 [2024-07-15 11:51:23.487520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.662 [2024-07-15 11:51:23.495097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.662 [2024-07-15 11:51:23.495137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.662 [2024-07-15 11:51:23.495152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.662 [2024-07-15 11:51:23.502994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.662 [2024-07-15 11:51:23.503020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.662 [2024-07-15 11:51:23.503035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.662 [2024-07-15 11:51:23.510915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.662 [2024-07-15 11:51:23.510941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.662 [2024-07-15 11:51:23.510972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.662 [2024-07-15 11:51:23.519440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152cd60) 00:25:15.662 [2024-07-15 11:51:23.519465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.662 [2024-07-15 11:51:23.519499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.662 00:25:15.662 Latency(us) 00:25:15.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.662 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:15.662 nvme0n1 : 2.00 4315.94 539.49 0.00 0.00 3703.49 615.92 10291.58 00:25:15.662 =================================================================================================================== 00:25:15.662 Total : 4315.94 539.49 0.00 0.00 3703.49 615.92 10291.58 00:25:15.662 0 00:25:15.662 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:15.662 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:15.662 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:15.662 | .driver_specific 00:25:15.662 | .nvme_error 00:25:15.662 | .status_code 00:25:15.662 | .command_transient_transport_error' 00:25:15.662 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 278 > 0 )) 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3125229 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3125229 ']' 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3125229 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3125229 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:15.920 11:51:23 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3125229' 00:25:15.920 killing process with pid 3125229 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3125229 00:25:15.920 Received shutdown signal, test time was about 2.000000 seconds 00:25:15.920 00:25:15.920 Latency(us) 00:25:15.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.920 =================================================================================================================== 00:25:15.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.920 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3125229 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3125643 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3125643 /var/tmp/bperf.sock 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3125643 ']' 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.178 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:16.436 [2024-07-15 11:51:24.177201] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
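The pass/fail decision for the randread digest case that just finished comes down to one counter: the harness reads the per-status-code NVMe error counters that bdev_nvme collects when started with --nvme-error-stat and requires at least one completion with COMMAND TRANSIENT TRANSPORT ERROR status (278 of them in this run). A minimal standalone form of that check, using the same rpc.py call and jq filter as the trace above (the errcount variable name is illustrative only):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdev_get_iostat exposes the per-bdev NVMe error counters under driver_specific.nvme_error
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the digest-error case only passes if the injected corruption was actually observed and retried
  (( errcount > 0 ))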
00:25:16.436 [2024-07-15 11:51:24.177281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125643 ] 00:25:16.436 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.436 [2024-07-15 11:51:24.236421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.436 [2024-07-15 11:51:24.347265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.694 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.694 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:16.694 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:16.694 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:16.952 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:16.952 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.952 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:16.952 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.952 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.952 11:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.516 nvme0n1 00:25:17.516 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:17.516 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.516 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.516 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.516 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:17.516 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.516 Running I/O for 2 seconds... 
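Everything the freshly started bdevperf instance needs for the randwrite digest-error pass is in the RPC sequence traced above; condensed into plain rpc.py calls it looks roughly like the sketch below. Socket paths, the target address and the subsystem NQN are copied from the trace; treating rpc_cmd as an rpc.py call against the target application's default RPC socket is an assumption made for illustration.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf side (/var/tmp/bperf.sock): keep NVMe error statistics and retry failed
  # commands indefinitely, so injected digest errors are retried instead of failing I/O
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: start with crc32c error injection disabled
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # attach the controller over TCP with data digest enabled (--ddgst)
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c results (arguments as logged), so data digests fail
  # verification and the writes complete with COMMAND TRANSIENT TRANSPORT ERROR, as seen below
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
  # kick off the queued 2-second randwrite workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests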
00:25:17.516 [2024-07-15 11:51:25.363893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ee5c8 00:25:17.516 [2024-07-15 11:51:25.364837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.364875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.374406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fac10 00:25:17.516 [2024-07-15 11:51:25.375334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.375360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.387053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e9e10 00:25:17.516 [2024-07-15 11:51:25.388155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.388182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.398513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6300 00:25:17.516 [2024-07-15 11:51:25.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.399771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.408938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5ec8 00:25:17.516 [2024-07-15 11:51:25.410003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.410045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.420269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f6890 00:25:17.516 [2024-07-15 11:51:25.421337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.421371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.431534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f7970 00:25:17.516 [2024-07-15 11:51:25.432690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.432730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.441867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fb480 00:25:17.516 [2024-07-15 11:51:25.443084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.443123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.453359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f8a50 00:25:17.516 [2024-07-15 11:51:25.454643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.454668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.464667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f0350 00:25:17.516 [2024-07-15 11:51:25.466118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.466144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.476130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e3060 00:25:17.516 [2024-07-15 11:51:25.477732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.477764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.488119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fcdd0 00:25:17.516 [2024-07-15 11:51:25.489880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.489908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:17.516 [2024-07-15 11:51:25.495855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e84c0 00:25:17.516 [2024-07-15 11:51:25.496576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.516 [2024-07-15 11:51:25.496600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.507088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190dfdc0 00:25:17.775 [2024-07-15 11:51:25.507803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.507829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.518487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f5378 00:25:17.775 [2024-07-15 11:51:25.519381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.519407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.529983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f7970 00:25:17.775 [2024-07-15 11:51:25.530979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.531006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.541388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f96f8 00:25:17.775 [2024-07-15 11:51:25.542549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.542574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.552610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e2c28 00:25:17.775 [2024-07-15 11:51:25.553796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.553822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.563284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ddc00 00:25:17.775 [2024-07-15 11:51:25.563982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.564008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.574641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e9e10 00:25:17.775 [2024-07-15 11:51:25.575531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.575556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.585836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190de470 00:25:17.775 [2024-07-15 11:51:25.586986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.587012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.596045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1b48 00:25:17.775 [2024-07-15 11:51:25.597219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.597244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.607833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ec408 00:25:17.775 [2024-07-15 11:51:25.608892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.608918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.620446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc998 00:25:17.775 [2024-07-15 11:51:25.622464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.622489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.629031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ea248 00:25:17.775 [2024-07-15 11:51:25.629901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.629927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.641920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ea680 00:25:17.775 [2024-07-15 11:51:25.643046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.643071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.775 [2024-07-15 11:51:25.654591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f92c0 00:25:17.775 [2024-07-15 11:51:25.656510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.775 [2024-07-15 11:51:25.656534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.662376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f4f40 00:25:17.776 [2024-07-15 11:51:25.663289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.663314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.673434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f3e60 00:25:17.776 [2024-07-15 11:51:25.674373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.674398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.685782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f2d80 00:25:17.776 [2024-07-15 11:51:25.687231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.687256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.695943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6b70 00:25:17.776 [2024-07-15 11:51:25.696973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.696998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.706681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ed0b0 00:25:17.776 [2024-07-15 11:51:25.707708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.707758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.717971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc998 00:25:17.776 [2024-07-15 11:51:25.719179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.719203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.728333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e3d08 00:25:17.776 [2024-07-15 11:51:25.729457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.729481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.740515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6b70 00:25:17.776 [2024-07-15 11:51:25.741807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 
11:51:25.741832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:17.776 [2024-07-15 11:51:25.751844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1b48 00:25:17.776 [2024-07-15 11:51:25.753289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.776 [2024-07-15 11:51:25.753313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.762792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1710 00:25:18.035 [2024-07-15 11:51:25.764093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.774361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e9168 00:25:18.035 [2024-07-15 11:51:25.775668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.775692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.786646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f4f40 00:25:18.035 [2024-07-15 11:51:25.788519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.788543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.794401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e0ea0 00:25:18.035 [2024-07-15 11:51:25.795313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.795337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.806945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ea248 00:25:18.035 [2024-07-15 11:51:25.808009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.808034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.819392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ebb98 00:25:18.035 [2024-07-15 11:51:25.821255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:18.035 [2024-07-15 11:51:25.821279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.827213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ee5c8 00:25:18.035 [2024-07-15 11:51:25.828076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.828100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.839709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f8a50 00:25:18.035 [2024-07-15 11:51:25.840778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.840804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.850860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190df550 00:25:18.035 [2024-07-15 11:51:25.852198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.852222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.861049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1710 00:25:18.035 [2024-07-15 11:51:25.862306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.862330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.872184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eee38 00:25:18.035 [2024-07-15 11:51:25.873520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.873558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.883631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ec408 00:25:18.035 [2024-07-15 11:51:25.884538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.884562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.896369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ecc78 00:25:18.035 [2024-07-15 11:51:25.897956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6437 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.897981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.905677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e4140 00:25:18.035 [2024-07-15 11:51:25.906709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.906733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.916767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190edd58 00:25:18.035 [2024-07-15 11:51:25.917624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.917648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.928467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eaab8 00:25:18.035 [2024-07-15 11:51:25.929456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.929482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.939169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e9168 00:25:18.035 [2024-07-15 11:51:25.940864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.940891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.949812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1710 00:25:18.035 [2024-07-15 11:51:25.950686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.950710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.961598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e7818 00:25:18.035 [2024-07-15 11:51:25.962641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.962666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.972345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fd208 00:25:18.035 [2024-07-15 11:51:25.973401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:18642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.973426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.985009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1b48 00:25:18.035 [2024-07-15 11:51:25.986192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.035 [2024-07-15 11:51:25.986217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:18.035 [2024-07-15 11:51:25.996576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fcdd0 00:25:18.036 [2024-07-15 11:51:25.997955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.036 [2024-07-15 11:51:25.997985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:18.036 [2024-07-15 11:51:26.007332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f46d0 00:25:18.036 [2024-07-15 11:51:26.008645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.036 [2024-07-15 11:51:26.008670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:18.036 [2024-07-15 11:51:26.017992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e99d8 00:25:18.036 [2024-07-15 11:51:26.018914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.036 [2024-07-15 11:51:26.018940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.030110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190de038 00:25:18.294 [2024-07-15 11:51:26.030830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.030856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.043040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e3d08 00:25:18.294 [2024-07-15 11:51:26.044663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.044688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.053570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f4f40 00:25:18.294 [2024-07-15 11:51:26.054769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.054795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.065068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ddc00 00:25:18.294 [2024-07-15 11:51:26.066125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.066149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.078127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5a90 00:25:18.294 [2024-07-15 11:51:26.080009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.080049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.086114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eaab8 00:25:18.294 [2024-07-15 11:51:26.086994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.087019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.096745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f8a50 00:25:18.294 [2024-07-15 11:51:26.097607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.097631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.109303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc998 00:25:18.294 [2024-07-15 11:51:26.110373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.110398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.120945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e0630 00:25:18.294 [2024-07-15 11:51:26.122120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.122145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.131765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e0a68 00:25:18.294 [2024-07-15 11:51:26.132869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.132896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.143853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190df118 00:25:18.294 [2024-07-15 11:51:26.144992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.294 [2024-07-15 11:51:26.145035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:18.294 [2024-07-15 11:51:26.155642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f6cc8 00:25:18.294 [2024-07-15 11:51:26.156801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.156828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.166230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5ec8 00:25:18.295 [2024-07-15 11:51:26.167281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.167306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.176856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e0a68 00:25:18.295 [2024-07-15 11:51:26.177759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.177785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.190112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f92c0 00:25:18.295 [2024-07-15 11:51:26.191439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.191464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.200785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f8a50 00:25:18.295 [2024-07-15 11:51:26.202066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.202091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.211229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e0a68 00:25:18.295 [2024-07-15 
11:51:26.212115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.212140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.222808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190de8a8 00:25:18.295 [2024-07-15 11:51:26.223536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.223561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.234278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5ec8 00:25:18.295 [2024-07-15 11:51:26.235308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.235333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.244657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f1868 00:25:18.295 [2024-07-15 11:51:26.245652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.245677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.257240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190df118 00:25:18.295 [2024-07-15 11:51:26.258426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.258451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:18.295 [2024-07-15 11:51:26.269126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5ec8 00:25:18.295 [2024-07-15 11:51:26.270409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.295 [2024-07-15 11:51:26.270434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:18.553 [2024-07-15 11:51:26.280940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6b70 00:25:18.553 [2024-07-15 11:51:26.282308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.553 [2024-07-15 11:51:26.282333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:18.553 [2024-07-15 11:51:26.292915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f92c0 
00:25:18.553 [2024-07-15 11:51:26.294362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.553 [2024-07-15 11:51:26.294392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:18.553 [2024-07-15 11:51:26.304151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e4578 00:25:18.553 [2024-07-15 11:51:26.305708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.553 [2024-07-15 11:51:26.305754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:18.553 [2024-07-15 11:51:26.314622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fe2e8 00:25:18.554 [2024-07-15 11:51:26.315779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.315805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.326047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f2d80 00:25:18.554 [2024-07-15 11:51:26.327077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.327102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.336418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6fa8 00:25:18.554 [2024-07-15 11:51:26.337568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.337592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.347642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f6020 00:25:18.554 [2024-07-15 11:51:26.348671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.348696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.359185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fbcf0 00:25:18.554 [2024-07-15 11:51:26.360180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.360206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.370700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with 
pdu=0x2000190f3e60 00:25:18.554 [2024-07-15 11:51:26.371867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.371894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.381476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fcdd0 00:25:18.554 [2024-07-15 11:51:26.382600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.382626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.393630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc998 00:25:18.554 [2024-07-15 11:51:26.394900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.394933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.405597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6b70 00:25:18.554 [2024-07-15 11:51:26.407116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.407142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.417766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190feb58 00:25:18.554 [2024-07-15 11:51:26.419333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.419359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.428459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc128 00:25:18.554 [2024-07-15 11:51:26.429627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.429652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.440059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f9b30 00:25:18.554 [2024-07-15 11:51:26.440963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.440989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.450564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12550d0) with pdu=0x2000190e0a68 00:25:18.554 [2024-07-15 11:51:26.452201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.452226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.460206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e7c50 00:25:18.554 [2024-07-15 11:51:26.460977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.461002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.471835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e27f0 00:25:18.554 [2024-07-15 11:51:26.472794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.472820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.483546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f46d0 00:25:18.554 [2024-07-15 11:51:26.484707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.484756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.495457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f4298 00:25:18.554 [2024-07-15 11:51:26.496771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.496797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.505962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e95a0 00:25:18.554 [2024-07-15 11:51:26.506768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.506795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.517451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fd208 00:25:18.554 [2024-07-15 11:51:26.518446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.518471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.528051] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fd640 00:25:18.554 [2024-07-15 11:51:26.528882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.554 [2024-07-15 11:51:26.528915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:18.554 [2024-07-15 11:51:26.539972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eee38 00:25:18.813 [2024-07-15 11:51:26.540803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.813 [2024-07-15 11:51:26.540831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:18.813 [2024-07-15 11:51:26.551925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ec408 00:25:18.813 [2024-07-15 11:51:26.552905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.813 [2024-07-15 11:51:26.552930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:18.813 [2024-07-15 11:51:26.563208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e9e10 00:25:18.813 [2024-07-15 11:51:26.564355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.813 [2024-07-15 11:51:26.564380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:18.813 [2024-07-15 11:51:26.575850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fe720 00:25:18.813 [2024-07-15 11:51:26.577143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.813 [2024-07-15 11:51:26.577169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:18.813 [2024-07-15 11:51:26.587537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc998 00:25:18.814 [2024-07-15 11:51:26.588968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.588994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.598886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f2510 00:25:18.814 [2024-07-15 11:51:26.600435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.600462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.610590] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eaef0 00:25:18.814 [2024-07-15 11:51:26.612314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.612351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.622424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e7818 00:25:18.814 [2024-07-15 11:51:26.624255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.624292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.631472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ebfd0 00:25:18.814 [2024-07-15 11:51:26.632671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.632697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.643974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e12d8 00:25:18.814 [2024-07-15 11:51:26.645388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.645414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.655935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f5be8 00:25:18.814 [2024-07-15 11:51:26.657456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.657487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.667641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fb480 00:25:18.814 [2024-07-15 11:51:26.669335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.669361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.676584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f7100 00:25:18.814 [2024-07-15 11:51:26.677654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.677679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 
11:51:26.688364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e88f8 00:25:18.814 [2024-07-15 11:51:26.689549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.689579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.700031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f92c0 00:25:18.814 [2024-07-15 11:51:26.701402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.701428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.711699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e7c50 00:25:18.814 [2024-07-15 11:51:26.713257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.713292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.723465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1b48 00:25:18.814 [2024-07-15 11:51:26.725153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.725179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.735201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6738 00:25:18.814 [2024-07-15 11:51:26.736973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.736999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.743250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f5be8 00:25:18.814 [2024-07-15 11:51:26.744089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.744115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.754037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f0350 00:25:18.814 [2024-07-15 11:51:26.754841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.754867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:25:18.814 [2024-07-15 11:51:26.765969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eb760 00:25:18.814 [2024-07-15 11:51:26.766897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.766923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.778064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5a90 00:25:18.814 [2024-07-15 11:51:26.779227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.779252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:18.814 [2024-07-15 11:51:26.791055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ebfd0 00:25:18.814 [2024-07-15 11:51:26.792367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.814 [2024-07-15 11:51:26.792392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:19.072 [2024-07-15 11:51:26.803293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e3498 00:25:19.072 [2024-07-15 11:51:26.804643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.072 [2024-07-15 11:51:26.804668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:19.072 [2024-07-15 11:51:26.815036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f3e60 00:25:19.072 [2024-07-15 11:51:26.816438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.072 [2024-07-15 11:51:26.816462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:19.072 [2024-07-15 11:51:26.825676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e8088 00:25:19.072 [2024-07-15 11:51:26.826943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.072 [2024-07-15 11:51:26.826969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.837305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e5220 00:25:19.073 [2024-07-15 11:51:26.838559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.838584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.848538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fcdd0 00:25:19.073 [2024-07-15 11:51:26.849700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.849724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.859808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f5be8 00:25:19.073 [2024-07-15 11:51:26.861173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.861197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.870164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e99d8 00:25:19.073 [2024-07-15 11:51:26.871495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.871519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.880302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f7538 00:25:19.073 [2024-07-15 11:51:26.881246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.881270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.891325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f92c0 00:25:19.073 [2024-07-15 11:51:26.892191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.892216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.903146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f4298 00:25:19.073 [2024-07-15 11:51:26.904263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.904287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.914242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6738 00:25:19.073 [2024-07-15 11:51:26.915317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.915341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.925215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f1ca0 00:25:19.073 [2024-07-15 11:51:26.926284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.926309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.936535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eea00 00:25:19.073 [2024-07-15 11:51:26.937484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.937508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.946570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190de8a8 00:25:19.073 [2024-07-15 11:51:26.947650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.947673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.958269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fd208 00:25:19.073 [2024-07-15 11:51:26.959369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.959393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.969344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eee38 00:25:19.073 [2024-07-15 11:51:26.970277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.970301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.980490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ee5c8 00:25:19.073 [2024-07-15 11:51:26.981708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.981758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:26.991491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fa7d8 00:25:19.073 [2024-07-15 11:51:26.992731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:26.992775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:27.001564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e01f8 00:25:19.073 [2024-07-15 11:51:27.003137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:27.003161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:27.012805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1710 00:25:19.073 [2024-07-15 11:51:27.013997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:27.014023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:27.025466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fc128 00:25:19.073 [2024-07-15 11:51:27.027098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:27.027123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:27.034820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e8d30 00:25:19.073 [2024-07-15 11:51:27.036707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:27.036732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:27.045197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fe2e8 00:25:19.073 [2024-07-15 11:51:27.046145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:27.046169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.073 [2024-07-15 11:51:27.056373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f3a28 00:25:19.073 [2024-07-15 11:51:27.057365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.073 [2024-07-15 11:51:27.057392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.067984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e4140 00:25:19.331 [2024-07-15 11:51:27.068908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.068933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.079341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eff18 00:25:19.331 [2024-07-15 11:51:27.080341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.080365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.090411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190feb58 00:25:19.331 [2024-07-15 11:51:27.091540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.091565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.100633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f46d0 00:25:19.331 [2024-07-15 11:51:27.101707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.101753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.111874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e1710 00:25:19.331 [2024-07-15 11:51:27.112913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.112939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.124771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f9f68 00:25:19.331 [2024-07-15 11:51:27.126271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.126295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.135675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e12d8 00:25:19.331 [2024-07-15 11:51:27.137315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.137339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.147173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190fb8b8 00:25:19.331 [2024-07-15 11:51:27.149023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 
11:51:27.149063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.155422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f57b0 00:25:19.331 [2024-07-15 11:51:27.156207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.156231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.167895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f2948 00:25:19.331 [2024-07-15 11:51:27.169815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.169841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.178401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f6890 00:25:19.331 [2024-07-15 11:51:27.179365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.179389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.189765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e4578 00:25:19.331 [2024-07-15 11:51:27.190747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.190771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.200126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f8e88 00:25:19.331 [2024-07-15 11:51:27.201173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.201197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.212283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f4298 00:25:19.331 [2024-07-15 11:51:27.213514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.213539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.223651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eff18 00:25:19.331 [2024-07-15 11:51:27.225008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:19.331 [2024-07-15 11:51:27.225048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.234081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190feb58 00:25:19.331 [2024-07-15 11:51:27.235410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.235434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.244230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f0ff8 00:25:19.331 [2024-07-15 11:51:27.245167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.245190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.255314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e27f0 00:25:19.331 [2024-07-15 11:51:27.256104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.256128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.266471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e88f8 00:25:19.331 [2024-07-15 11:51:27.267554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.267582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.331 [2024-07-15 11:51:27.277843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190eee38 00:25:19.331 [2024-07-15 11:51:27.278769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.331 [2024-07-15 11:51:27.278794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:19.332 [2024-07-15 11:51:27.287861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e0ea0 00:25:19.332 [2024-07-15 11:51:27.288918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.332 [2024-07-15 11:51:27.288943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.332 [2024-07-15 11:51:27.299517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f7970 00:25:19.332 [2024-07-15 11:51:27.300611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12412 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:19.332 [2024-07-15 11:51:27.300635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.332 [2024-07-15 11:51:27.309711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e12d8 00:25:19.332 [2024-07-15 11:51:27.310757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.332 [2024-07-15 11:51:27.310781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.589 [2024-07-15 11:51:27.322868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190f5378 00:25:19.589 [2024-07-15 11:51:27.324073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.589 [2024-07-15 11:51:27.324097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.589 [2024-07-15 11:51:27.334199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ee190 00:25:19.589 [2024-07-15 11:51:27.335536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.589 [2024-07-15 11:51:27.335560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.589 [2024-07-15 11:51:27.345225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190e6300 00:25:19.589 [2024-07-15 11:51:27.346588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.589 [2024-07-15 11:51:27.346611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.589 [2024-07-15 11:51:27.356117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12550d0) with pdu=0x2000190ff3c8 00:25:19.589 [2024-07-15 11:51:27.357468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.589 [2024-07-15 11:51:27.357492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.589 00:25:19.589 Latency(us) 00:25:19.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.589 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:19.589 nvme0n1 : 2.00 22812.36 89.11 0.00 0.00 5601.97 2208.81 13883.92 00:25:19.589 =================================================================================================================== 00:25:19.589 Total : 22812.36 89.11 0.00 0.00 5601.97 2208.81 13883.92 00:25:19.589 0 00:25:19.589 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:19.589 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:19.589 11:51:27 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:19.589 | .driver_specific 00:25:19.589 | .nvme_error 00:25:19.589 | .status_code 00:25:19.589 | .command_transient_transport_error' 00:25:19.589 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 179 > 0 )) 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3125643 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3125643 ']' 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3125643 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3125643 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3125643' 00:25:19.847 killing process with pid 3125643 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3125643 00:25:19.847 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.847 00:25:19.847 Latency(us) 00:25:19.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.847 =================================================================================================================== 00:25:19.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.847 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3125643 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3126171 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3126171 /var/tmp/bperf.sock 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3126171 ']' 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.105 11:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.105 [2024-07-15 11:51:27.959970] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:20.105 [2024-07-15 11:51:27.960065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126171 ] 00:25:20.105 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:20.105 Zero copy mechanism will not be used. 00:25:20.105 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.105 [2024-07-15 11:51:28.017709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.363 [2024-07-15 11:51:28.126747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.363 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.363 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:20.363 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:20.363 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:20.621 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:20.621 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.621 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.621 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.621 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.621 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.188 nvme0n1 00:25:21.188 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:21.188 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.188 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.188 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.188 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # 
bperf_py perform_tests 00:25:21.188 11:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.188 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.188 Zero copy mechanism will not be used. 00:25:21.188 Running I/O for 2 seconds... 00:25:21.188 [2024-07-15 11:51:29.036913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.037235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.037272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.043896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.043994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.044022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.050440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.050726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.050783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.057227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.057493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.057519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.063985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.064268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.064294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.070732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.071030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.071057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.077190] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.077456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.077482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.083074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.083345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.083374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.088727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.089021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.089062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.094588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.094903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.094937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.100331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.100590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.100616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.106122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.106381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.106407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.111759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.112026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.112053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:21.188 [2024-07-15 11:51:29.117412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.117669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.117695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.123021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.123291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.123317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.128907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.129184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.129209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.134869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.135150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.135176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.141946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.142248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.142275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.147779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.148052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.148092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.153807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.154109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.154134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.159656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.159934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.159961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.165443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.165706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.165753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.188 [2024-07-15 11:51:29.171428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.188 [2024-07-15 11:51:29.171703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.188 [2024-07-15 11:51:29.171728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.178162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.178534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.178575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.186475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.186746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.186787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.195031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.195370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.195396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.203976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.204330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.204357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.212043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.212311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.212337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.218818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.219127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.219152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.225803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.226124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.226149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.232124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.232411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.232437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.238216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.238480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.238505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.244469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.244758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.244784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.250506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.250800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.250827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.256866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.257157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.257182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.263241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.263520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.263551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.269761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.270048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.270073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.276409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.276678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.276703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.282867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.283150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.283176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.289386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.448 [2024-07-15 11:51:29.289683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.448 [2024-07-15 11:51:29.289709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.448 [2024-07-15 11:51:29.296068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.296346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 
[2024-07-15 11:51:29.296373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.302506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.302813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.302842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.308965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.309249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.309274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.315103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.315369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.315395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.322227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.322564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.322589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.329498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.329806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.329833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.335876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.336159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.336184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.342226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.342489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.342515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.348408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.348673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.348698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.354665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.354954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.354980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.360840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.361125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.361150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.368701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.369083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.369109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.375918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.376221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.376247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.383351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.383651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.391590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.391981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.392022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.400931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.401215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.401240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.408932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.409207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.409233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.417331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.417623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.417649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.425635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.425941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.425968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.449 [2024-07-15 11:51:29.433782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.449 [2024-07-15 11:51:29.434058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.449 [2024-07-15 11:51:29.434086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.442065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.442335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.442360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.449528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.449821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.449855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.457409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.457680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.457705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.465439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.465726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.465774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.473176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.473435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.473460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.480505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.480801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.480829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.487597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.708 [2024-07-15 11:51:29.487883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.708 [2024-07-15 11:51:29.487911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.708 [2024-07-15 11:51:29.494847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.495124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.495149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.501315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 
[2024-07-15 11:51:29.501579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.501604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.507055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.507308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.507333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.513310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.513577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.513602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.519276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.519542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.519567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.526170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.526434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.526459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.533551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.533858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.533886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.539792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.540084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.540125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.546503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.546806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.546834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.553133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.553398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.553423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.559307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.559573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.559599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.565236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.565499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.565533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.571228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.571491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.571516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.577382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.577645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.577671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.585182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.585445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.585471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.593156] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.593512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.593537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.602013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.602394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.602419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.610548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.610967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.610994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.618790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.619082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.619107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.625878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.626194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.626219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.632898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.633208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.633235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.640616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.640994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.641034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:21.709 [2024-07-15 11:51:29.649365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.709 [2024-07-15 11:51:29.649663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.709 [2024-07-15 11:51:29.649689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.709 [2024-07-15 11:51:29.656281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.710 [2024-07-15 11:51:29.656550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.710 [2024-07-15 11:51:29.656575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.710 [2024-07-15 11:51:29.663035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.710 [2024-07-15 11:51:29.663302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.710 [2024-07-15 11:51:29.663327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.710 [2024-07-15 11:51:29.671097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.710 [2024-07-15 11:51:29.671398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.710 [2024-07-15 11:51:29.671424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.710 [2024-07-15 11:51:29.678985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.710 [2024-07-15 11:51:29.679340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.710 [2024-07-15 11:51:29.679366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.710 [2024-07-15 11:51:29.687545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.710 [2024-07-15 11:51:29.687920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.710 [2024-07-15 11:51:29.687949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.697033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.697383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.697410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.706415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.706780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.706809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.715480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.715893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.715920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.723892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.724212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.724239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.731091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.731366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.731392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.739567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.739922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.739951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.748155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.748520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.748562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.756250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.756575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.756602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.765463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.765764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.765793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.774142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.774501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.774535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.782613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.782977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.783006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.790959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.791276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.791303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.799076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.799420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.799446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.806607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.806953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.806982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.814797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.815136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.815164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.823460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.823836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.823865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.831601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.832008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.832053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.840342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.840681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.840708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.848374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.848652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.848679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.856952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.969 [2024-07-15 11:51:29.857299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.969 [2024-07-15 11:51:29.857326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.969 [2024-07-15 11:51:29.863854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.864205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.864230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.870069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.870329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 
[2024-07-15 11:51:29.870356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.875829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.876114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.876140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.882067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.882330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.882357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.888576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.888894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.888923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.894404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.894665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.894691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.899976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.900259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.900286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.905688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.905986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.906014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.911815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.912104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.912131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.917578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.917897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.917925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.923698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.924007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.924051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.929828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.930171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.930198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.935779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.936075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.936118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.941940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.942237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.942263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.970 [2024-07-15 11:51:29.948810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:21.970 [2024-07-15 11:51:29.949098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.970 [2024-07-15 11:51:29.949123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.955799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.956065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.956110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.962605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.962905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.962932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.969934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.970222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.976178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.976446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.976473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.982319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.982586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.982612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.988491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.988792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.988820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:29.994517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:29.994809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:29.994836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:30.000572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:30.000866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:30.000894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:30.007364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:30.007716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:30.007757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:30.013185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:30.013457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:30.013487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:30.019274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:30.019566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.230 [2024-07-15 11:51:30.019596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.230 [2024-07-15 11:51:30.025170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.230 [2024-07-15 11:51:30.025431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.025458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.030861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.031151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.031178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.036769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.037116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.037142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.042498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 
[2024-07-15 11:51:30.042817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.042844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.049399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.049681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.049710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.056293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.056576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.056604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.062854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.063143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.063170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.069488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.069788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.069818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.075223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.075487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.075513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.080933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.081276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.081303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.086868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.087172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.087199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.092512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.092798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.092826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.098164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.098433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.098460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.103892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.104174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.104201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.110362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.110770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.110796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.117569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.117864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.117892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.125908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.126219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.126245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.133393] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.133791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.133834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.141344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.141694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.141747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.148380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.148647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.148673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.155525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.155821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.155850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.161863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.162168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.162194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.168697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.169049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.169091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.175949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.176230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.176256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:22.231 [2024-07-15 11:51:30.182801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.183230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.231 [2024-07-15 11:51:30.183255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.231 [2024-07-15 11:51:30.189996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.231 [2024-07-15 11:51:30.190331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.232 [2024-07-15 11:51:30.190358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.232 [2024-07-15 11:51:30.197044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.232 [2024-07-15 11:51:30.197391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.232 [2024-07-15 11:51:30.197418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.232 [2024-07-15 11:51:30.203800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.232 [2024-07-15 11:51:30.204084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.232 [2024-07-15 11:51:30.204110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.232 [2024-07-15 11:51:30.210128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.232 [2024-07-15 11:51:30.210503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.232 [2024-07-15 11:51:30.210544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.217264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.217514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.217541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.224482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.224858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.224899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.231186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.231449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.231476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.237006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.237265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.237299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.243239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.243513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.243538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.249310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.249572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.249606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.255181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.255444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.255470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.261131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.261391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.261417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.267197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.267459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.267485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.273154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.273450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.273476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.279336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.279616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.279642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.491 [2024-07-15 11:51:30.285515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.491 [2024-07-15 11:51:30.285804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.491 [2024-07-15 11:51:30.285832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.291594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.291897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.291924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.297554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.297852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.297880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.304140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.304410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.304436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.311627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.311953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.318000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.318277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.318303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.324137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.324406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.324431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.330177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.330440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.330466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.336539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.336832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.336860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.344267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.344541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.344568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.350918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.351207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.351233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.357346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.357636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 
[2024-07-15 11:51:30.357662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.363482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.363777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.363804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.369608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.369913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.369942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.377106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.377379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.377405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.384026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.384304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.384330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.390609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.390890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.390916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.397095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.397366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.397391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.403573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.403898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.403933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.410085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.410356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.410382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.416456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.416750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.416776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.423083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.423356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.423381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.429645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.429965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.429993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.436495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.436790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.436817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.443170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.443442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.443469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.449621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.449918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.449945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.455983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.456270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.456297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.462408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.462687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.462713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.468855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.469149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.469175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.492 [2024-07-15 11:51:30.475474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.492 [2024-07-15 11:51:30.475769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.492 [2024-07-15 11:51:30.475796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.482233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.482493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.482519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.488192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.488455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.488482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.494281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.494542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.494568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.500429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.500705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.500754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.507282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.507638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.507665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.515563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.515938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.515981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.524640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.524942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.524969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.531666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.531960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.531988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.538166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.538446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.538472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.545082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 
[2024-07-15 11:51:30.545354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.545381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.553077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.553367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.553399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.559518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.559820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.559847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.565200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.565462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.565489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.571056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.571319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.571345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.577175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.577455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.577488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.584604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.584925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.584954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.592404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.592674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.592701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.600239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.600680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.752 [2024-07-15 11:51:30.600707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.752 [2024-07-15 11:51:30.609800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.752 [2024-07-15 11:51:30.610159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.610186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.619113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.619491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.619531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.628069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.628378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.628405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.637863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.638249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.638289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.647401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.647708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.647733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.656208] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.656571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.656602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.665426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.665784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.665811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.673635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.674007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.674033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.682762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.683140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.683190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.691958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.692319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.692345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.700772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.701063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.701089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.708621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.708944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.708970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:22.753 [2024-07-15 11:51:30.716644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.716939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.716966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.725302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.725620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.725652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.753 [2024-07-15 11:51:30.733957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:22.753 [2024-07-15 11:51:30.734308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.753 [2024-07-15 11:51:30.734336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.742399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.742734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.742769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.750779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.751070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.751096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.759099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.759417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.759443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.767619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.767926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.775552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.775904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.775931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.784866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.785157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.785198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.792152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.792423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.792448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.798840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.799153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.806938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.012 [2024-07-15 11:51:30.807293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.012 [2024-07-15 11:51:30.807318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.012 [2024-07-15 11:51:30.814732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.815024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.815065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.822754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.823064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.823091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.829793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.830085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.830111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.836856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.837148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.837181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.843455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.843722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.843771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.851486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.851820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.851847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.858522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.858828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.858855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.865127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.865394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.865419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.871970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.872259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.872284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.878422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.878688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.878713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.885610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.885923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.885950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.893094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.893361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.893387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.900818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.901105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.901131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.908782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.909071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.909097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.916645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.916961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.916988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.924517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.924811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 
[2024-07-15 11:51:30.924847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.932472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.932780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.932807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.940258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.940554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.940581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.948201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.948502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.948528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.956242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.956585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.956610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.964380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.964747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.964774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.972369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.972682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.972707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.980154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.980479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.980505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.989152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.989503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.989530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.013 [2024-07-15 11:51:30.998140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.013 [2024-07-15 11:51:30.998445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.013 [2024-07-15 11:51:30.998473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.271 [2024-07-15 11:51:31.007328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.272 [2024-07-15 11:51:31.007640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.272 [2024-07-15 11:51:31.007666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.272 [2024-07-15 11:51:31.016292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.272 [2024-07-15 11:51:31.016661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.272 [2024-07-15 11:51:31.016700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.272 [2024-07-15 11:51:31.023562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.272 [2024-07-15 11:51:31.023937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.272 [2024-07-15 11:51:31.023979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.272 [2024-07-15 11:51:31.030543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.272 [2024-07-15 11:51:31.030843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.272 [2024-07-15 11:51:31.030871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.272 [2024-07-15 11:51:31.036894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x134a2e0) with pdu=0x2000190fef90 00:25:23.272 [2024-07-15 11:51:31.036985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.272 [2024-07-15 11:51:31.037011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.272 00:25:23.272 Latency(us) 00:25:23.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.272 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:23.272 nvme0n1 : 2.00 4379.94 547.49 0.00 0.00 3644.74 1966.08 9611.95 00:25:23.272 =================================================================================================================== 00:25:23.272 Total : 4379.94 547.49 0.00 0.00 3644.74 1966.08 9611.95 00:25:23.272 0 00:25:23.272 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:23.272 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:23.272 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:23.272 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:23.272 | .driver_specific 00:25:23.272 | .nvme_error 00:25:23.272 | .status_code 00:25:23.272 | .command_transient_transport_error' 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 283 > 0 )) 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3126171 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3126171 ']' 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3126171 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3126171 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3126171' 00:25:23.530 killing process with pid 3126171 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3126171 00:25:23.530 Received shutdown signal, test time was about 2.000000 seconds 00:25:23.530 00:25:23.530 Latency(us) 00:25:23.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.530 =================================================================================================================== 00:25:23.530 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.530 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3126171 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3124677 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3124677 ']' 00:25:23.788 11:51:31 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3124677 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3124677 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3124677' 00:25:23.788 killing process with pid 3124677 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3124677 00:25:23.788 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3124677 00:25:24.046 00:25:24.046 real 0m15.742s 00:25:24.046 user 0m30.481s 00:25:24.046 sys 0m5.098s 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.046 ************************************ 00:25:24.046 END TEST nvmf_digest_error 00:25:24.046 ************************************ 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.046 rmmod nvme_tcp 00:25:24.046 rmmod nvme_fabrics 00:25:24.046 rmmod nvme_keyring 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3124677 ']' 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3124677 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3124677 ']' 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3124677 00:25:24.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3124677) - No such process 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3124677 is not found' 00:25:24.046 Process with pid 3124677 is not found 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.046 11:51:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.590 11:51:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:26.590 00:25:26.590 real 0m35.328s 00:25:26.590 user 1m0.581s 00:25:26.590 sys 0m11.663s 00:25:26.590 11:51:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:26.590 11:51:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:26.590 ************************************ 00:25:26.590 END TEST nvmf_digest 00:25:26.590 ************************************ 00:25:26.590 11:51:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:26.590 11:51:34 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:26.590 11:51:34 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:26.590 11:51:34 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:26.590 11:51:34 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:26.590 11:51:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:26.590 11:51:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.590 11:51:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.590 ************************************ 00:25:26.590 START TEST nvmf_bdevperf 00:25:26.590 ************************************ 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:26.590 * Looking for test storage... 
00:25:26.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.590 11:51:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.591 11:51:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:28.491 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:28.491 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:28.491 Found net devices under 0000:84:00.0: cvl_0_0 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.491 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:28.492 Found net devices under 0000:84:00.1: cvl_0_1 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:25:28.492 00:25:28.492 --- 10.0.0.2 ping statistics --- 00:25:28.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.492 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:25:28.492 00:25:28.492 --- 10.0.0.1 ping statistics --- 00:25:28.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.492 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3128535 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3128535 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3128535 ']' 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.492 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.492 [2024-07-15 11:51:36.360363] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:28.492 [2024-07-15 11:51:36.360460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.492 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.492 [2024-07-15 11:51:36.425442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:28.750 [2024-07-15 11:51:36.539070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:28.750 [2024-07-15 11:51:36.539133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.750 [2024-07-15 11:51:36.539161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.750 [2024-07-15 11:51:36.539173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.750 [2024-07-15 11:51:36.539183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.750 [2024-07-15 11:51:36.539275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.750 [2024-07-15 11:51:36.539341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.750 [2024-07-15 11:51:36.539345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.750 [2024-07-15 11:51:36.682693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.750 Malloc0 00:25:28.750 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.751 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.009 [2024-07-15 11:51:36.745527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.009 { 00:25:29.009 "params": { 00:25:29.009 "name": "Nvme$subsystem", 00:25:29.009 "trtype": "$TEST_TRANSPORT", 00:25:29.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.009 "adrfam": "ipv4", 00:25:29.009 "trsvcid": "$NVMF_PORT", 00:25:29.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.009 "hdgst": ${hdgst:-false}, 00:25:29.009 "ddgst": ${ddgst:-false} 00:25:29.009 }, 00:25:29.009 "method": "bdev_nvme_attach_controller" 00:25:29.009 } 00:25:29.009 EOF 00:25:29.009 )") 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:29.009 11:51:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:29.009 "params": { 00:25:29.009 "name": "Nvme1", 00:25:29.009 "trtype": "tcp", 00:25:29.009 "traddr": "10.0.0.2", 00:25:29.009 "adrfam": "ipv4", 00:25:29.009 "trsvcid": "4420", 00:25:29.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.009 "hdgst": false, 00:25:29.009 "ddgst": false 00:25:29.009 }, 00:25:29.009 "method": "bdev_nvme_attach_controller" 00:25:29.009 }' 00:25:29.009 [2024-07-15 11:51:36.792983] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:29.009 [2024-07-15 11:51:36.793082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128568 ] 00:25:29.009 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.009 [2024-07-15 11:51:36.855313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.009 [2024-07-15 11:51:36.969303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.267 Running I/O for 1 seconds... 
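Spelled out, the bring-up traced above amounts to the sequence below. This is a minimal sketch rather than the literal host/bdevperf.sh: it assumes the default RPC socket at /var/tmp/spdk.sock, shortens the Jenkins workspace path to the repository root, and relies on the namespace layout nvmftestinit already created (target address 10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk, initiator address 10.0.0.1 on cvl_0_1).

# Target side: start nvmf_tgt in the test namespace and build the subsystem (sketch).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# (the test waits for the RPC socket via waitforlisten before issuing RPCs)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO unit size
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: gen_nvmf_target_json (nvmf/common.sh) emits the bdev_nvme_attach_controller
# config printed above; bdevperf reads it via process substitution and runs the 1-second verify pass.
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1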
00:25:30.673 00:25:30.673 Latency(us) 00:25:30.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:30.673 Verification LBA range: start 0x0 length 0x4000 00:25:30.673 Nvme1n1 : 1.04 8617.28 33.66 0.00 0.00 14233.72 2876.30 44855.75 00:25:30.673 =================================================================================================================== 00:25:30.673 Total : 8617.28 33.66 0.00 0.00 14233.72 2876.30 44855.75 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3128821 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:30.673 { 00:25:30.673 "params": { 00:25:30.673 "name": "Nvme$subsystem", 00:25:30.673 "trtype": "$TEST_TRANSPORT", 00:25:30.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.673 "adrfam": "ipv4", 00:25:30.673 "trsvcid": "$NVMF_PORT", 00:25:30.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.673 "hdgst": ${hdgst:-false}, 00:25:30.673 "ddgst": ${ddgst:-false} 00:25:30.673 }, 00:25:30.673 "method": "bdev_nvme_attach_controller" 00:25:30.673 } 00:25:30.673 EOF 00:25:30.673 )") 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:30.673 11:51:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:30.673 "params": { 00:25:30.673 "name": "Nvme1", 00:25:30.673 "trtype": "tcp", 00:25:30.673 "traddr": "10.0.0.2", 00:25:30.673 "adrfam": "ipv4", 00:25:30.673 "trsvcid": "4420", 00:25:30.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.673 "hdgst": false, 00:25:30.673 "ddgst": false 00:25:30.673 }, 00:25:30.673 "method": "bdev_nvme_attach_controller" 00:25:30.673 }' 00:25:30.673 [2024-07-15 11:51:38.500247] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:30.673 [2024-07-15 11:51:38.500324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128821 ] 00:25:30.673 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.673 [2024-07-15 11:51:38.559413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.934 [2024-07-15 11:51:38.672264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.191 Running I/O for 15 seconds... 
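The second bdevperf pass (PID 3128821) is the fault-injection half of the test: the same verify workload runs for 15 seconds with the extra -f flag, and while I/O is still in flight the script kills the target with kill -9. Every command outstanding on the TCP qpair then completes with ABORTED - SQ DELETION, which is what the long run of nvme_qpair.c notices below shows; the LBAs and CIDs vary but each command/completion pair records the same event. A minimal sketch of that step, using the PIDs from this run:

# Fault injection as driven by host/bdevperf.sh lines 29-35 (sketch; PIDs are from this log).
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!          # 3128821 in this run
sleep 3                 # let the workload ramp up
kill -9 3128535         # nvmf_tgt started by tgt_init above, killed with I/O still in flight
sleep 3                 # outstanding commands drain as ABORTED - SQ DELETION completions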
00:25:33.713 11:51:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3128535 00:25:33.713 11:51:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:33.713 [2024-07-15 11:51:41.467110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.467983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.467996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.713 [2024-07-15 11:51:41.468453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.713 [2024-07-15 11:51:41.468720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.713 [2024-07-15 11:51:41.468759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 
[2024-07-15 11:51:41.468850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.468980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.468995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46320 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.469979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.469993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 
11:51:41.470143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.714 [2024-07-15 11:51:41.470519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.714 [2024-07-15 11:51:41.470532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.470975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.470989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.471044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.471073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.471123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.471150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.471176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.715 [2024-07-15 11:51:41.471202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.715 [2024-07-15 11:51:41.471227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28e60 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.471256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.715 [2024-07-15 11:51:41.471266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.715 [2024-07-15 11:51:41.471277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45984 len:8 PRP1 0x0 PRP2 0x0 00:25:33.715 [2024-07-15 11:51:41.471289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.715 [2024-07-15 11:51:41.471345] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe28e60 was disconnected and freed. reset controller. 
00:25:33.715 [2024-07-15 11:51:41.474574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.474646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.475472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.475522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.475537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.475754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.475973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.475994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.476012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.479108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.715 [2024-07-15 11:51:41.488404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.488801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.488830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.488851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.489078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.489271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.489290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.489302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.492366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.715 [2024-07-15 11:51:41.501687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.502031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.502062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.502091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.502281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.502473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.502492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.502504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.505492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.715 [2024-07-15 11:51:41.514887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.515299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.515323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.515350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.515539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.515756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.515776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.515789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.518684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.715 [2024-07-15 11:51:41.528102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.528574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.528612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.528627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.528843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.529056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.529079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.529092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.532007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.715 [2024-07-15 11:51:41.541295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.541764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.541821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.541835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.542050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.542260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.542280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.542291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.545182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.715 [2024-07-15 11:51:41.554490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.554821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.554846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.554860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.715 [2024-07-15 11:51:41.555048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.715 [2024-07-15 11:51:41.555240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.715 [2024-07-15 11:51:41.555259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.715 [2024-07-15 11:51:41.555271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.715 [2024-07-15 11:51:41.558202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.715 [2024-07-15 11:51:41.567764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.715 [2024-07-15 11:51:41.568218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.715 [2024-07-15 11:51:41.568255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.715 [2024-07-15 11:51:41.568270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.568459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.568651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.568670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.568682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.571617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.716 [2024-07-15 11:51:41.580988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.581398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.581449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.581462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.581664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.581890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.581911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.581924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.584876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.716 [2024-07-15 11:51:41.594098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.594475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.594514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.594528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.594730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.594980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.595000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.595013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.597910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.716 [2024-07-15 11:51:41.607143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.607518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.607556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.607569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.607799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.608019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.608040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.608052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.610948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.716 [2024-07-15 11:51:41.620356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.620828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.620854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.620882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.621088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.621303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.621323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.621336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.624416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.716 [2024-07-15 11:51:41.633558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.633981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.634008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.634043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.634251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.634444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.634463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.634475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.637476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.716 [2024-07-15 11:51:41.646944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.647403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.647441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.647455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.647643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.647874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.647896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.647909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.650898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.716 [2024-07-15 11:51:41.660068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.660517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.660555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.660570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.660796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.661016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.661037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.661054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.663945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.716 [2024-07-15 11:51:41.673139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.673487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.673512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.673525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.673714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.673942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.673963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.673976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.676864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.716 [2024-07-15 11:51:41.686265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.716 [2024-07-15 11:51:41.686634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.716 [2024-07-15 11:51:41.686672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.716 [2024-07-15 11:51:41.686686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.716 [2024-07-15 11:51:41.686917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.716 [2024-07-15 11:51:41.687135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.716 [2024-07-15 11:51:41.687154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.716 [2024-07-15 11:51:41.687166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.716 [2024-07-15 11:51:41.690058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.974 [2024-07-15 11:51:41.699896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.974 [2024-07-15 11:51:41.700322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.974 [2024-07-15 11:51:41.700346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.974 [2024-07-15 11:51:41.700359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.974 [2024-07-15 11:51:41.700562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.974 [2024-07-15 11:51:41.700778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.974 [2024-07-15 11:51:41.700799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.974 [2024-07-15 11:51:41.700825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.703746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.975 [2024-07-15 11:51:41.712917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.713290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.713332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.713346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.713548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.713764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.713784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.713797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.716691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.975 [2024-07-15 11:51:41.725999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.726459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.726483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.726511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.726714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.726960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.726982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.726995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.730259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.975 [2024-07-15 11:51:41.739489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.739925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.739951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.739981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.740226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.740431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.740451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.740464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.743470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.975 [2024-07-15 11:51:41.752786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.753265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.753288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.753317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.753505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.753703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.753744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.753761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.756734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.975 [2024-07-15 11:51:41.766056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.766504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.766528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.766556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.766763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.766956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.766975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.766987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.769865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.975 [2024-07-15 11:51:41.779286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.779658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.779697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.779710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.779945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.780176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.780195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.780207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.783196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.975 [2024-07-15 11:51:41.792562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.975 [2024-07-15 11:51:41.793051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.975 [2024-07-15 11:51:41.793092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.975 [2024-07-15 11:51:41.793106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.975 [2024-07-15 11:51:41.793294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.975 [2024-07-15 11:51:41.793487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.975 [2024-07-15 11:51:41.793505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.975 [2024-07-15 11:51:41.793517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.975 [2024-07-15 11:51:41.796558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.975 [2024-07-15 11:51:41.805926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.806350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.806373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.806386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.806591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.806830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.806852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.806866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.809820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.976 [2024-07-15 11:51:41.818976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.819413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.819437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.819465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.819654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.819892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.819913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.819925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.822819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.976 [2024-07-15 11:51:41.832061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.832498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.832522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.832550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.832756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.832971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.832990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.833003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.835793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.976 [2024-07-15 11:51:41.845144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.845652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.845690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.845709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.845947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.846174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.846193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.846206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.849095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.976 [2024-07-15 11:51:41.858193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.858565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.858604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.858617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.858864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.859076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.859096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.859109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.861999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.976 [2024-07-15 11:51:41.871208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.871545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.871570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.871584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.871799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.872019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.872039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.872052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.874964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.976 [2024-07-15 11:51:41.884239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.884577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.884602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.884616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.884847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.976 [2024-07-15 11:51:41.885062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.976 [2024-07-15 11:51:41.885087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.976 [2024-07-15 11:51:41.885100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.976 [2024-07-15 11:51:41.887990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.976 [2024-07-15 11:51:41.897268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.976 [2024-07-15 11:51:41.897675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.976 [2024-07-15 11:51:41.897699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.976 [2024-07-15 11:51:41.897727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.976 [2024-07-15 11:51:41.897949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.977 [2024-07-15 11:51:41.898166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.977 [2024-07-15 11:51:41.898185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.977 [2024-07-15 11:51:41.898197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.977 [2024-07-15 11:51:41.901086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.977 [2024-07-15 11:51:41.910343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.977 [2024-07-15 11:51:41.910749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.977 [2024-07-15 11:51:41.910773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.977 [2024-07-15 11:51:41.910801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.977 [2024-07-15 11:51:41.910995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.977 [2024-07-15 11:51:41.911206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.977 [2024-07-15 11:51:41.911225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.977 [2024-07-15 11:51:41.911237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.977 [2024-07-15 11:51:41.914151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.977 [2024-07-15 11:51:41.923367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.977 [2024-07-15 11:51:41.923720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.977 [2024-07-15 11:51:41.923764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.977 [2024-07-15 11:51:41.923779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.977 [2024-07-15 11:51:41.923981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.977 [2024-07-15 11:51:41.924173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.977 [2024-07-15 11:51:41.924192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.977 [2024-07-15 11:51:41.924204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.977 [2024-07-15 11:51:41.927018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.977 [2024-07-15 11:51:41.936490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.977 [2024-07-15 11:51:41.936838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.977 [2024-07-15 11:51:41.936863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.977 [2024-07-15 11:51:41.936878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.977 [2024-07-15 11:51:41.937066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.977 [2024-07-15 11:51:41.937258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.977 [2024-07-15 11:51:41.937277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.977 [2024-07-15 11:51:41.937289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.977 [2024-07-15 11:51:41.940190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:33.977 [2024-07-15 11:51:41.949651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.977 [2024-07-15 11:51:41.950012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.977 [2024-07-15 11:51:41.950037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:33.977 [2024-07-15 11:51:41.950065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:33.977 [2024-07-15 11:51:41.950254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:33.977 [2024-07-15 11:51:41.950446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.977 [2024-07-15 11:51:41.950465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.977 [2024-07-15 11:51:41.950477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.977 [2024-07-15 11:51:41.953401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.234 [2024-07-15 11:51:41.963202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.234 [2024-07-15 11:51:41.963574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.234 [2024-07-15 11:51:41.963612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.234 [2024-07-15 11:51:41.963625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.234 [2024-07-15 11:51:41.963871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:41.964077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:41.964097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:41.964109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:41.967444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.235 [2024-07-15 11:51:41.976177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:41.976601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:41.976639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:41.976653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:41.976908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:41.977134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:41.977155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:41.977168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:41.980350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.235 [2024-07-15 11:51:41.989379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:41.989824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:41.989863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:41.989879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:41.990096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:41.990289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:41.990308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:41.990320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:41.993346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.235 [2024-07-15 11:51:42.002521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.002887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.002926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.002940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.003158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.003351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.003370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.003382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.006307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.235 [2024-07-15 11:51:42.015595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.016038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.016076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.016091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.016279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.016471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.016490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.016507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.019363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.235 [2024-07-15 11:51:42.028658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.029062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.029102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.029116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.029339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.029531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.029550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.029562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.032480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.235 [2024-07-15 11:51:42.041627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.041999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.042025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.042039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.042242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.042435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.042453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.042465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.045373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.235 [2024-07-15 11:51:42.054821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.055193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.055231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.055245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.055447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.055639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.055658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.055670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.058586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.235 [2024-07-15 11:51:42.067935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.068310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.068352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.068366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.068568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.068786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.068806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.068818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.071611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.235 [2024-07-15 11:51:42.081098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.081502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.081526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.081540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.081768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.081989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.082009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.082036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.084926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.235 [2024-07-15 11:51:42.094234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.235 [2024-07-15 11:51:42.094608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.235 [2024-07-15 11:51:42.094646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.235 [2024-07-15 11:51:42.094659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.235 [2024-07-15 11:51:42.094917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.235 [2024-07-15 11:51:42.095159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.235 [2024-07-15 11:51:42.095178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.235 [2024-07-15 11:51:42.095190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.235 [2024-07-15 11:51:42.098079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.235 [2024-07-15 11:51:42.107334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.107705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.107749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.107764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.107966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.108163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.108182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.108194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.111011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.236 [2024-07-15 11:51:42.120365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.120804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.120843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.120857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.121055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.121247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.121266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.121278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.124178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.236 [2024-07-15 11:51:42.133440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.133817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.133856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.133869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.134072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.134265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.134284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.134296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.137196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.236 [2024-07-15 11:51:42.146494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.146865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.146903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.146917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.147119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.147311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.147330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.147342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.150162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.236 [2024-07-15 11:51:42.159529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.159971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.159995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.160023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.160222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.160415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.160433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.160445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.163346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.236 [2024-07-15 11:51:42.172607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.172984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.173023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.173036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.173239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.173432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.173450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.173462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.176380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.236 [2024-07-15 11:51:42.185681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.186065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.186104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.186118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.186307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.186500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.186518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.186530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.189307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.236 [2024-07-15 11:51:42.198734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.199156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.199179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.199212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.199402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.199594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.199613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.199625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.202526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.236 [2024-07-15 11:51:42.211872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.236 [2024-07-15 11:51:42.212302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 11:51:42.212326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.236 [2024-07-15 11:51:42.212354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.236 [2024-07-15 11:51:42.212542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.236 [2024-07-15 11:51:42.212734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.236 [2024-07-15 11:51:42.212782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.236 [2024-07-15 11:51:42.212796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.236 [2024-07-15 11:51:42.215716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.494 [2024-07-15 11:51:42.225125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.494 [2024-07-15 11:51:42.225606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.494 [2024-07-15 11:51:42.225648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.494 [2024-07-15 11:51:42.225662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.225913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.226158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.226178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.226190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.229096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.495 [2024-07-15 11:51:42.238496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.238985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.239026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.239041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.239264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.239461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.239486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.239499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.242621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.495 [2024-07-15 11:51:42.251785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.252248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.252271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.252300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.252489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.252681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.252699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.252711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.255763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.495 [2024-07-15 11:51:42.264959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.265424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.265447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.265475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.265664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.265906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.265928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.265940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.268790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.495 [2024-07-15 11:51:42.278162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.278580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.278604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.278632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.278864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.279070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.279090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.279103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.281993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.495 [2024-07-15 11:51:42.291236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.291696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.291734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.291759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.291973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.292188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.292207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.292219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.295109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.495 [2024-07-15 11:51:42.304339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.304804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.304829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.304843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.305052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.305261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.305280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.305292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.308211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.495 [2024-07-15 11:51:42.317478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.317938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.317976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.317990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.318179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.318371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.318390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.318402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.321302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.495 [2024-07-15 11:51:42.330568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.331015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.331038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.331067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.331260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.331453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.331472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.331484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.334405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.495 [2024-07-15 11:51:42.343731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.344195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.344219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.344247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.495 [2024-07-15 11:51:42.344436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.495 [2024-07-15 11:51:42.344628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.495 [2024-07-15 11:51:42.344647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.495 [2024-07-15 11:51:42.344659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.495 [2024-07-15 11:51:42.347579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.495 [2024-07-15 11:51:42.356886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.495 [2024-07-15 11:51:42.357356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.495 [2024-07-15 11:51:42.357407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.495 [2024-07-15 11:51:42.357421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.357623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.357843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.357863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.357876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.360666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.496 [2024-07-15 11:51:42.370053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.370505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.370554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.370567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.370812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.371018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.371054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.371072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.374001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.496 [2024-07-15 11:51:42.383197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.383643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.383667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.383697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.383919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.384151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.384171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.384183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.387074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.496 [2024-07-15 11:51:42.396362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.396796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.396819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.396847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.397036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.397228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.397246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.397259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.400205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.496 [2024-07-15 11:51:42.409416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.409882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.409907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.409921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.410109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.410302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.410320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.410333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.413253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.496 [2024-07-15 11:51:42.422750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.423163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.423225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.423240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.423443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.423636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.423656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.423668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.426588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.496 [2024-07-15 11:51:42.435952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.436345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.436398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.436412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.436614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.436835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.436855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.436868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.439796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.496 [2024-07-15 11:51:42.449198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.449571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.449628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.449641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.449874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.450107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.450126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.450138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.453017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.496 [2024-07-15 11:51:42.462203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.462646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.462697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.462711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.462939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.463158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.463177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.463189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.465967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.496 [2024-07-15 11:51:42.475307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.496 [2024-07-15 11:51:42.475721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.496 [2024-07-15 11:51:42.475794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.496 [2024-07-15 11:51:42.475809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.496 [2024-07-15 11:51:42.476024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.496 [2024-07-15 11:51:42.476251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.496 [2024-07-15 11:51:42.476270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.496 [2024-07-15 11:51:42.476282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.496 [2024-07-15 11:51:42.479632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.755 [2024-07-15 11:51:42.488817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.489278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.489326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.489340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.489528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.755 [2024-07-15 11:51:42.489735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.755 [2024-07-15 11:51:42.489766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.755 [2024-07-15 11:51:42.489779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.755 [2024-07-15 11:51:42.492762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.755 [2024-07-15 11:51:42.502104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.502469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.502524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.502538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.502751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.755 [2024-07-15 11:51:42.502950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.755 [2024-07-15 11:51:42.502970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.755 [2024-07-15 11:51:42.502982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.755 [2024-07-15 11:51:42.505938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.755 [2024-07-15 11:51:42.515324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.515702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.515785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.515801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.516009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.755 [2024-07-15 11:51:42.516220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.755 [2024-07-15 11:51:42.516239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.755 [2024-07-15 11:51:42.516251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.755 [2024-07-15 11:51:42.519169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.755 [2024-07-15 11:51:42.528547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.528914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.528981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.528995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.529197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.755 [2024-07-15 11:51:42.529390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.755 [2024-07-15 11:51:42.529409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.755 [2024-07-15 11:51:42.529421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.755 [2024-07-15 11:51:42.532236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.755 [2024-07-15 11:51:42.541602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.542088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.542136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.542149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.542353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.755 [2024-07-15 11:51:42.542546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.755 [2024-07-15 11:51:42.542564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.755 [2024-07-15 11:51:42.542576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.755 [2024-07-15 11:51:42.545498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.755 [2024-07-15 11:51:42.554856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.555273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.555298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.555332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.555543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.755 [2024-07-15 11:51:42.555744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.755 [2024-07-15 11:51:42.555790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.755 [2024-07-15 11:51:42.555803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.755 [2024-07-15 11:51:42.558719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.755 [2024-07-15 11:51:42.568220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.755 [2024-07-15 11:51:42.568637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.755 [2024-07-15 11:51:42.568689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.755 [2024-07-15 11:51:42.568703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.755 [2024-07-15 11:51:42.568926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.569144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.569164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.569177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.572461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.756 [2024-07-15 11:51:42.581699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.582161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.582186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.582216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.582437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.582671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.582691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.582704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.585963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.756 [2024-07-15 11:51:42.595242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.595692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.595744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.595761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.595989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.596224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.596249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.596262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.599305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.756 [2024-07-15 11:51:42.608635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.609077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.609134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.609148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.609350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.609543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.609562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.609574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.612595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.756 [2024-07-15 11:51:42.621944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.622378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.622426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.622439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.622642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.622868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.622889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.622902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.625962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.756 [2024-07-15 11:51:42.635264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.635635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.635694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.635708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.635942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.636163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.636182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.636195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.639248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.756 [2024-07-15 11:51:42.648524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.648898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.648939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.648954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.649179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.649371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.649390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.649402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.652371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.756 [2024-07-15 11:51:42.661917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.662343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.662367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.662380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.662583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.662818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.662839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.662852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.665851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.756 [2024-07-15 11:51:42.675094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.675565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.675621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.675634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.756 [2024-07-15 11:51:42.675882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.756 [2024-07-15 11:51:42.676115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.756 [2024-07-15 11:51:42.676135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.756 [2024-07-15 11:51:42.676147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.756 [2024-07-15 11:51:42.679046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.756 [2024-07-15 11:51:42.688313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.756 [2024-07-15 11:51:42.688687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.756 [2024-07-15 11:51:42.688724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.756 [2024-07-15 11:51:42.688745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.757 [2024-07-15 11:51:42.688959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.757 [2024-07-15 11:51:42.689170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.757 [2024-07-15 11:51:42.689190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.757 [2024-07-15 11:51:42.689202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.757 [2024-07-15 11:51:42.692138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.757 [2024-07-15 11:51:42.701481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.757 [2024-07-15 11:51:42.701898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.757 [2024-07-15 11:51:42.701922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.757 [2024-07-15 11:51:42.701950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.757 [2024-07-15 11:51:42.702139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.757 [2024-07-15 11:51:42.702331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.757 [2024-07-15 11:51:42.702349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.757 [2024-07-15 11:51:42.702361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.757 [2024-07-15 11:51:42.705277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.757 [2024-07-15 11:51:42.714546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.757 [2024-07-15 11:51:42.714976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.757 [2024-07-15 11:51:42.715016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.757 [2024-07-15 11:51:42.715031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.757 [2024-07-15 11:51:42.715256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.757 [2024-07-15 11:51:42.715448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.757 [2024-07-15 11:51:42.715467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.757 [2024-07-15 11:51:42.715479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.757 [2024-07-15 11:51:42.718371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.757 [2024-07-15 11:51:42.727732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.757 [2024-07-15 11:51:42.728193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.757 [2024-07-15 11:51:42.728217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:34.757 [2024-07-15 11:51:42.728230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:34.757 [2024-07-15 11:51:42.728432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:34.757 [2024-07-15 11:51:42.728625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.757 [2024-07-15 11:51:42.728643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.757 [2024-07-15 11:51:42.728660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.757 [2024-07-15 11:51:42.731629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.016 [2024-07-15 11:51:42.741556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.742008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.742049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.742063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.742266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.742458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.742477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.742489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.745527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.016 [2024-07-15 11:51:42.754885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.755351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.755389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.755403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.755610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.755838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.755859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.755872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.758785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.016 [2024-07-15 11:51:42.767979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.768391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.768415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.768442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.768631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.768854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.768875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.768887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.771677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.016 [2024-07-15 11:51:42.781224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.781695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.781755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.781770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.781978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.782189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.782208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.782220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.785149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.016 [2024-07-15 11:51:42.794218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.794667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.794714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.794727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.794956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.795168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.795187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.795199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.797973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.016 [2024-07-15 11:51:42.807341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.807755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.807779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.807807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.807996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.808188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.808207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.808219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.811142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.016 [2024-07-15 11:51:42.820482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.820898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.820945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.820959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.821148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.821345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.821364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.821376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.824278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.016 [2024-07-15 11:51:42.833581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.834002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.834039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.834054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.834242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.834435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.834453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.834465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.837281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.016 [2024-07-15 11:51:42.846594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.847029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.847053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.016 [2024-07-15 11:51:42.847081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.016 [2024-07-15 11:51:42.847280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.016 [2024-07-15 11:51:42.847473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.016 [2024-07-15 11:51:42.847491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.016 [2024-07-15 11:51:42.847503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.016 [2024-07-15 11:51:42.850320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.016 [2024-07-15 11:51:42.859672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.016 [2024-07-15 11:51:42.860167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.016 [2024-07-15 11:51:42.860212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.860225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.860427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.860619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.860638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.860650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.863569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.017 [2024-07-15 11:51:42.872715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.873156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.873193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.873208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.873395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.873588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.873606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.873618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.876511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.017 [2024-07-15 11:51:42.885856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.886293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.886331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.886345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.886533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.886726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.886766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.886781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.889574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.017 [2024-07-15 11:51:42.898932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.899359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.899408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.899421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.899623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.899844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.899865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.899877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.902666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.017 [2024-07-15 11:51:42.912101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.912547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.912598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.912616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.912864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.913069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.913105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.913118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.916027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.017 [2024-07-15 11:51:42.925201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.925661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.925710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.925724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.925953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.926166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.926185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.926197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.928972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.017 [2024-07-15 11:51:42.938287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.938751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.938790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.938804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.939006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.939198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.939217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.939229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.942044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.017 [2024-07-15 11:51:42.951394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.951838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.951879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.951892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.952095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.952287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.952310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.952323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.955239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.017 [2024-07-15 11:51:42.964546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.964988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.965026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.965041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.965230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.965422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.965441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.965453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.968354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.017 [2024-07-15 11:51:42.977616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.978074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.978097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.978125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.978313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.978506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.978524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.978536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.981454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.017 [2024-07-15 11:51:42.990717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.017 [2024-07-15 11:51:42.991164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.017 [2024-07-15 11:51:42.991193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.017 [2024-07-15 11:51:42.991223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.017 [2024-07-15 11:51:42.991424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.017 [2024-07-15 11:51:42.991628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.017 [2024-07-15 11:51:42.991648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.017 [2024-07-15 11:51:42.991660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.017 [2024-07-15 11:51:42.994858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.276 [2024-07-15 11:51:43.004324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.004771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.004797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.004812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.276 [2024-07-15 11:51:43.005013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.276 [2024-07-15 11:51:43.005239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.276 [2024-07-15 11:51:43.005258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.276 [2024-07-15 11:51:43.005270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.276 [2024-07-15 11:51:43.008546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.276 [2024-07-15 11:51:43.017552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.018042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.018066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.018094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.276 [2024-07-15 11:51:43.018283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.276 [2024-07-15 11:51:43.018475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.276 [2024-07-15 11:51:43.018494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.276 [2024-07-15 11:51:43.018506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.276 [2024-07-15 11:51:43.021435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.276 [2024-07-15 11:51:43.030628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.031059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.031102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.031116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.276 [2024-07-15 11:51:43.031318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.276 [2024-07-15 11:51:43.031510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.276 [2024-07-15 11:51:43.031529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.276 [2024-07-15 11:51:43.031541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.276 [2024-07-15 11:51:43.034356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.276 [2024-07-15 11:51:43.043708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.044155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.044179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.044207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.276 [2024-07-15 11:51:43.044400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.276 [2024-07-15 11:51:43.044592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.276 [2024-07-15 11:51:43.044611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.276 [2024-07-15 11:51:43.044623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.276 [2024-07-15 11:51:43.047440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.276 [2024-07-15 11:51:43.056798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.057247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.057271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.057300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.276 [2024-07-15 11:51:43.057489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.276 [2024-07-15 11:51:43.057682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.276 [2024-07-15 11:51:43.057701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.276 [2024-07-15 11:51:43.057713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.276 [2024-07-15 11:51:43.060619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.276 [2024-07-15 11:51:43.069892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.070341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.070390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.070403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.276 [2024-07-15 11:51:43.070606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.276 [2024-07-15 11:51:43.070826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.276 [2024-07-15 11:51:43.070847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.276 [2024-07-15 11:51:43.070859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.276 [2024-07-15 11:51:43.073649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.276 [2024-07-15 11:51:43.083039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.276 [2024-07-15 11:51:43.083453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.276 [2024-07-15 11:51:43.083476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.276 [2024-07-15 11:51:43.083490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.083692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.083934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.083956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.083974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.086883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.277 [2024-07-15 11:51:43.096078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.096506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.096529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.096557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.096769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.096969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.096988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.097001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.099792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.277 [2024-07-15 11:51:43.109152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.109599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.109638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.109653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.109887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.110102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.110135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.110148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.113055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.277 [2024-07-15 11:51:43.122120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.122538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.122590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.122604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.122848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.123054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.123073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.123086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.125977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.277 [2024-07-15 11:51:43.135220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.135671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.135725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.135753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.135977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.136190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.136209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.136221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.138997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.277 [2024-07-15 11:51:43.148315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.148779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.148802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.148815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.149017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.149210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.149228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.149240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.152054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.277 [2024-07-15 11:51:43.161405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.161809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.161833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.161847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.162036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.162228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.162247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.162259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.165159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.277 [2024-07-15 11:51:43.174421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.174852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.174890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.174905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.175093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.175290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.175308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.175320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.277 [2024-07-15 11:51:43.178241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.277 [2024-07-15 11:51:43.187586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.277 [2024-07-15 11:51:43.187950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.277 [2024-07-15 11:51:43.188002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.277 [2024-07-15 11:51:43.188017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.277 [2024-07-15 11:51:43.188241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.277 [2024-07-15 11:51:43.188435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.277 [2024-07-15 11:51:43.188453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.277 [2024-07-15 11:51:43.188465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.278 [2024-07-15 11:51:43.191357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.278 [2024-07-15 11:51:43.200886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.278 [2024-07-15 11:51:43.201328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.278 [2024-07-15 11:51:43.201386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.278 [2024-07-15 11:51:43.201400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.278 [2024-07-15 11:51:43.201608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.278 [2024-07-15 11:51:43.201839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.278 [2024-07-15 11:51:43.201860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.278 [2024-07-15 11:51:43.201874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.278 [2024-07-15 11:51:43.204896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.278 [2024-07-15 11:51:43.214249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.278 [2024-07-15 11:51:43.214673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.278 [2024-07-15 11:51:43.214721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.278 [2024-07-15 11:51:43.214735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.278 [2024-07-15 11:51:43.214973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.278 [2024-07-15 11:51:43.215194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.278 [2024-07-15 11:51:43.215213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.278 [2024-07-15 11:51:43.215226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.278 [2024-07-15 11:51:43.218280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.278 [2024-07-15 11:51:43.227445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.278 [2024-07-15 11:51:43.227851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.278 [2024-07-15 11:51:43.227877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.278 [2024-07-15 11:51:43.227892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.278 [2024-07-15 11:51:43.228121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.278 [2024-07-15 11:51:43.228319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.278 [2024-07-15 11:51:43.228338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.278 [2024-07-15 11:51:43.228351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.278 [2024-07-15 11:51:43.231362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.278 [2024-07-15 11:51:43.240623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.278 [2024-07-15 11:51:43.241077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.278 [2024-07-15 11:51:43.241120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.278 [2024-07-15 11:51:43.241134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.278 [2024-07-15 11:51:43.241360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.278 [2024-07-15 11:51:43.241565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.278 [2024-07-15 11:51:43.241599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.278 [2024-07-15 11:51:43.241611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.278 [2024-07-15 11:51:43.244867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.278 [2024-07-15 11:51:43.253922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.278 [2024-07-15 11:51:43.254380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.278 [2024-07-15 11:51:43.254419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.278 [2024-07-15 11:51:43.254433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.278 [2024-07-15 11:51:43.254627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.278 [2024-07-15 11:51:43.254857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.278 [2024-07-15 11:51:43.254878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.278 [2024-07-15 11:51:43.254891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.278 [2024-07-15 11:51:43.257931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.536 [2024-07-15 11:51:43.267551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.536 [2024-07-15 11:51:43.268073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.536 [2024-07-15 11:51:43.268097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.536 [2024-07-15 11:51:43.268132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.536 [2024-07-15 11:51:43.268327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.536 [2024-07-15 11:51:43.268525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.536 [2024-07-15 11:51:43.268545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.536 [2024-07-15 11:51:43.268557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.536 [2024-07-15 11:51:43.271568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.536 [2024-07-15 11:51:43.280844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.536 [2024-07-15 11:51:43.281311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.536 [2024-07-15 11:51:43.281349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.536 [2024-07-15 11:51:43.281364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.536 [2024-07-15 11:51:43.281559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.536 [2024-07-15 11:51:43.281798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.536 [2024-07-15 11:51:43.281819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.536 [2024-07-15 11:51:43.281833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.536 [2024-07-15 11:51:43.284826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.536 [2024-07-15 11:51:43.294179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.536 [2024-07-15 11:51:43.294657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.536 [2024-07-15 11:51:43.294695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.536 [2024-07-15 11:51:43.294710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.536 [2024-07-15 11:51:43.294952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.536 [2024-07-15 11:51:43.295176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.536 [2024-07-15 11:51:43.295196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.536 [2024-07-15 11:51:43.295208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.536 [2024-07-15 11:51:43.298184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.536 [2024-07-15 11:51:43.307446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.536 [2024-07-15 11:51:43.307867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.536 [2024-07-15 11:51:43.307895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.536 [2024-07-15 11:51:43.307924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.536 [2024-07-15 11:51:43.308137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.536 [2024-07-15 11:51:43.308335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.536 [2024-07-15 11:51:43.308358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.536 [2024-07-15 11:51:43.308372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.536 [2024-07-15 11:51:43.311362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.536 [2024-07-15 11:51:43.320624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.536 [2024-07-15 11:51:43.321124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.536 [2024-07-15 11:51:43.321149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.536 [2024-07-15 11:51:43.321177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.536 [2024-07-15 11:51:43.321372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.536 [2024-07-15 11:51:43.321570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.536 [2024-07-15 11:51:43.321590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.536 [2024-07-15 11:51:43.321602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.536 [2024-07-15 11:51:43.324616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.536 [2024-07-15 11:51:43.333906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.334378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.334417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.334431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.334625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.334871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.334892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.334906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.337900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.537 [2024-07-15 11:51:43.347179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.347622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.347646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.347674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.347918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.348155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.348174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.348187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.351159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.537 [2024-07-15 11:51:43.360408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.360867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.360908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.360924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.361138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.361336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.361355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.361367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.364353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.537 [2024-07-15 11:51:43.373700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.374143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.374182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.374196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.374404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.374602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.374621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.374633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.377637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.537 [2024-07-15 11:51:43.386966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.387457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.387495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.387510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.387704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.387930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.387951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.387964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.390942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.537 [2024-07-15 11:51:43.400281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.400753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.400779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.400808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.401013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.401227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.401246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.401259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.404277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.537 [2024-07-15 11:51:43.413616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.414102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.414126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.414155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.414350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.414548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.414567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.414580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.417591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.537 [2024-07-15 11:51:43.426883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.427358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.427397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.427412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.427606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.427851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.427872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.427885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.431104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.537 [2024-07-15 11:51:43.440303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.440752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.440778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.440792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.440987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.441185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.441205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.441222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.444284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.537 [2024-07-15 11:51:43.453519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.453993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.454032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.454047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.454258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.454457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.454477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.454489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.457490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.537 [2024-07-15 11:51:43.466792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.467273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.467311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.467326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.467520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.467717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.467744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.467773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.470774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.537 [2024-07-15 11:51:43.480016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.480475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.480501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.480515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.480709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.480941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.537 [2024-07-15 11:51:43.480962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.537 [2024-07-15 11:51:43.480976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.537 [2024-07-15 11:51:43.484097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.537 [2024-07-15 11:51:43.493270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.537 [2024-07-15 11:51:43.493736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.537 [2024-07-15 11:51:43.493792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.537 [2024-07-15 11:51:43.493808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.537 [2024-07-15 11:51:43.494037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.537 [2024-07-15 11:51:43.494295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.538 [2024-07-15 11:51:43.494316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.538 [2024-07-15 11:51:43.494330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.538 [2024-07-15 11:51:43.497536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.538 [2024-07-15 11:51:43.506709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.538 [2024-07-15 11:51:43.507144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.538 [2024-07-15 11:51:43.507184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.538 [2024-07-15 11:51:43.507198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.538 [2024-07-15 11:51:43.507406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.538 [2024-07-15 11:51:43.507604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.538 [2024-07-15 11:51:43.507623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.538 [2024-07-15 11:51:43.507635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.538 [2024-07-15 11:51:43.510682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.538 [2024-07-15 11:51:43.520135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.538 [2024-07-15 11:51:43.520577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.538 [2024-07-15 11:51:43.520605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.538 [2024-07-15 11:51:43.520620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.538 [2024-07-15 11:51:43.520844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.538 [2024-07-15 11:51:43.521078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.538 [2024-07-15 11:51:43.521099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.538 [2024-07-15 11:51:43.521112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.819 [2024-07-15 11:51:43.524370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.819 [2024-07-15 11:51:43.533326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.819 [2024-07-15 11:51:43.533793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.819 [2024-07-15 11:51:43.533818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.533833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.534047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.534266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.534286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.534298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.537312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.820 [2024-07-15 11:51:43.546586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.547016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.547042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.547072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.547282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.547481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.547500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.547512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.550526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.820 [2024-07-15 11:51:43.559826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.560301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.560325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.560354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.560549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.560773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.560809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.560822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.563817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.820 [2024-07-15 11:51:43.573168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.573616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.573654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.573669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.573898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.574138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.574157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.574170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.577147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.820 [2024-07-15 11:51:43.586478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.586838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.586879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.586894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.587102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.587300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.587320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.587332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.590346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.820 [2024-07-15 11:51:43.599705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.600109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.600135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.600150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.600344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.600542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.600561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.600574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.603586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.820 [2024-07-15 11:51:43.613035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.613386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.613411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.613425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.613619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.613862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.613884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.613897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.820 [2024-07-15 11:51:43.616904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.820 [2024-07-15 11:51:43.626255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.820 [2024-07-15 11:51:43.626703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.820 [2024-07-15 11:51:43.626727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.820 [2024-07-15 11:51:43.626767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.820 [2024-07-15 11:51:43.626984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.820 [2024-07-15 11:51:43.627201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.820 [2024-07-15 11:51:43.627221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.820 [2024-07-15 11:51:43.627233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.630249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.821 [2024-07-15 11:51:43.639615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.639989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.640030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.640046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.640239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.640438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.640457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.640469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.643478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.821 [2024-07-15 11:51:43.652938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.653303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.653329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.653343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.653537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.653759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.653779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.653792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.656807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.821 [2024-07-15 11:51:43.666246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.666607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.666647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.666661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.666901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.667144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.667169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.667183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.670160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.821 [2024-07-15 11:51:43.679495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.679893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.679918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.679933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.680160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.680359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.680378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.680391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.683394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.821 [2024-07-15 11:51:43.692744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.693189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.693237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.693252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.693445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.693643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.693662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.693675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.696677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.821 [2024-07-15 11:51:43.706061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.706468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.706493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.706507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.706716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.706944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.706964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.706977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.710029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.821 [2024-07-15 11:51:43.719457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.719861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.719887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.719901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.720130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.720329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.720349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.720362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.723409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.821 [2024-07-15 11:51:43.732861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.733295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.733334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.733348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.733566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.733794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.733816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.733829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.737061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.821 [2024-07-15 11:51:43.746155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.746552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.746579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.746609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.746840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.747052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.747072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.747085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.750379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.821 [2024-07-15 11:51:43.759617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.760062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.760102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.760116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.760329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.760527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.760547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.760559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.821 [2024-07-15 11:51:43.763643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.821 [2024-07-15 11:51:43.773054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.821 [2024-07-15 11:51:43.773385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.821 [2024-07-15 11:51:43.773411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.821 [2024-07-15 11:51:43.773425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.821 [2024-07-15 11:51:43.773619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.821 [2024-07-15 11:51:43.773849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.821 [2024-07-15 11:51:43.773871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.821 [2024-07-15 11:51:43.773884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.822 [2024-07-15 11:51:43.776918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.822 [2024-07-15 11:51:43.786339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.822 [2024-07-15 11:51:43.786693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.822 [2024-07-15 11:51:43.786719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.822 [2024-07-15 11:51:43.786733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.822 [2024-07-15 11:51:43.786955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.822 [2024-07-15 11:51:43.787172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.822 [2024-07-15 11:51:43.787192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.822 [2024-07-15 11:51:43.787204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.822 [2024-07-15 11:51:43.790218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.822 [2024-07-15 11:51:43.799551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.822 [2024-07-15 11:51:43.799912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.822 [2024-07-15 11:51:43.799940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:35.822 [2024-07-15 11:51:43.799955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:35.822 [2024-07-15 11:51:43.800186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:35.822 [2024-07-15 11:51:43.800384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:35.822 [2024-07-15 11:51:43.800404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:35.822 [2024-07-15 11:51:43.800421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.822 [2024-07-15 11:51:43.803786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.081 [2024-07-15 11:51:43.813307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.081 [2024-07-15 11:51:43.813723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.081 [2024-07-15 11:51:43.813754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.081 [2024-07-15 11:51:43.813783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.081 [2024-07-15 11:51:43.813984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.081 [2024-07-15 11:51:43.814200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.081 [2024-07-15 11:51:43.814219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.081 [2024-07-15 11:51:43.814231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.081 [2024-07-15 11:51:43.817287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.081 [2024-07-15 11:51:43.826546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.081 [2024-07-15 11:51:43.826914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.081 [2024-07-15 11:51:43.826955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.081 [2024-07-15 11:51:43.826971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.081 [2024-07-15 11:51:43.827216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.081 [2024-07-15 11:51:43.827415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.081 [2024-07-15 11:51:43.827434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.081 [2024-07-15 11:51:43.827447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.081 [2024-07-15 11:51:43.830461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.081 [2024-07-15 11:51:43.839943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.081 [2024-07-15 11:51:43.840328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.081 [2024-07-15 11:51:43.840368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.081 [2024-07-15 11:51:43.840383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.081 [2024-07-15 11:51:43.840590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.081 [2024-07-15 11:51:43.840818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.081 [2024-07-15 11:51:43.840838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.081 [2024-07-15 11:51:43.840851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.081 [2024-07-15 11:51:43.843822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.081 [2024-07-15 11:51:43.853265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.081 [2024-07-15 11:51:43.853672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.081 [2024-07-15 11:51:43.853697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.081 [2024-07-15 11:51:43.853711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.081 [2024-07-15 11:51:43.853951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.081 [2024-07-15 11:51:43.854190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.081 [2024-07-15 11:51:43.854210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.081 [2024-07-15 11:51:43.854222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.081 [2024-07-15 11:51:43.857237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.081 [2024-07-15 11:51:43.866543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.081 [2024-07-15 11:51:43.866975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.081 [2024-07-15 11:51:43.867001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.081 [2024-07-15 11:51:43.867039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.081 [2024-07-15 11:51:43.867266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.081 [2024-07-15 11:51:43.867464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.081 [2024-07-15 11:51:43.867483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.081 [2024-07-15 11:51:43.867496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.081 [2024-07-15 11:51:43.870473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.081 [2024-07-15 11:51:43.879847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.880327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.880366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.880381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.880593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.880818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.880838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.880851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.883949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.082 [2024-07-15 11:51:43.893154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.893604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.893630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.893644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.893882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.894099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.894135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.894148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.897119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.082 [2024-07-15 11:51:43.906398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.906857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.906882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.906912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.907125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.907323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.907343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.907355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.910347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.082 [2024-07-15 11:51:43.919597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.920100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.920139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.920155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.920349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.920547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.920566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.920578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.923592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.082 [2024-07-15 11:51:43.932848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.933325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.933349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.933378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.933572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.933797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.933832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.933846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.936855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.082 [2024-07-15 11:51:43.946142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.946610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.946634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.946665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.946906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.947134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.947154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.947166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.950155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.082 [2024-07-15 11:51:43.959445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.959904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.959954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.959970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.960182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.960380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.960400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.960412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.963398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.082 [2024-07-15 11:51:43.972690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.973169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.973193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.973221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.973415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.973613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.973632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.973645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.976651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.082 [2024-07-15 11:51:43.985977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.986427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.986465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.986484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:43.986680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:43.986926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:43.986948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:43.986961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:43.989952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.082 [2024-07-15 11:51:43.999252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:43.999713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:43.999744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:43.999776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:44.000004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:44.000255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:44.000275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:44.000288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:44.003521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.082 [2024-07-15 11:51:44.012603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.082 [2024-07-15 11:51:44.013083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.082 [2024-07-15 11:51:44.013122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.082 [2024-07-15 11:51:44.013137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.082 [2024-07-15 11:51:44.013332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.082 [2024-07-15 11:51:44.013530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.082 [2024-07-15 11:51:44.013550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.082 [2024-07-15 11:51:44.013562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.082 [2024-07-15 11:51:44.016602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.083 [2024-07-15 11:51:44.025872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.083 [2024-07-15 11:51:44.026341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.083 [2024-07-15 11:51:44.026380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.083 [2024-07-15 11:51:44.026394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.083 [2024-07-15 11:51:44.026588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.083 [2024-07-15 11:51:44.026828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.083 [2024-07-15 11:51:44.026854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.083 [2024-07-15 11:51:44.026868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.083 [2024-07-15 11:51:44.029869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.083 [2024-07-15 11:51:44.039166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.083 [2024-07-15 11:51:44.039562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.083 [2024-07-15 11:51:44.039586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.083 [2024-07-15 11:51:44.039600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.083 [2024-07-15 11:51:44.039857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.083 [2024-07-15 11:51:44.040083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.083 [2024-07-15 11:51:44.040118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.083 [2024-07-15 11:51:44.040131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.083 [2024-07-15 11:51:44.043107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.083 [2024-07-15 11:51:44.052357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.083 [2024-07-15 11:51:44.052812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.083 [2024-07-15 11:51:44.052837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.083 [2024-07-15 11:51:44.052866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.083 [2024-07-15 11:51:44.053080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.083 [2024-07-15 11:51:44.053279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.083 [2024-07-15 11:51:44.053298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.083 [2024-07-15 11:51:44.053310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.083 [2024-07-15 11:51:44.056333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.083 [2024-07-15 11:51:44.066197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.066684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.066725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.066748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.066978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.067220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.067239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.067252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.070294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.343 [2024-07-15 11:51:44.079456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.079912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.079952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.079968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.080180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.080379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.080398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.080410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.083449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.343 [2024-07-15 11:51:44.092759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.093202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.093241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.093255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.093450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.093648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.093667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.093679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.096676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.343 [2024-07-15 11:51:44.106056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.106422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.106461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.106475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.106696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.106921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.106942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.106955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.109936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.343 [2024-07-15 11:51:44.119307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.119764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.119813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.119827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.120060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.120253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.120272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.120284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.123238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.343 [2024-07-15 11:51:44.132496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.132980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.133040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.133054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.133242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.133434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.133453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.133465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.136339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.343 [2024-07-15 11:51:44.145491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.343 [2024-07-15 11:51:44.145930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.343 [2024-07-15 11:51:44.145967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.343 [2024-07-15 11:51:44.145981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.343 [2024-07-15 11:51:44.146170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.343 [2024-07-15 11:51:44.146362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.343 [2024-07-15 11:51:44.146381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.343 [2024-07-15 11:51:44.146393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.343 [2024-07-15 11:51:44.149210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.344 [2024-07-15 11:51:44.158561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.158987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.159011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.159039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.159228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.159420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.159439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.159455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.162274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.344 [2024-07-15 11:51:44.171589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.172045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.172069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.172097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.172285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.172477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.172496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.172508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.175321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.344 [2024-07-15 11:51:44.184632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.185052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.185108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.185122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.185324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.185517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.185536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.185547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.188479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.344 [2024-07-15 11:51:44.197724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.198195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.198234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.198249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.198437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.198630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.198649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.198661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.201580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.344 [2024-07-15 11:51:44.210893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.211330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.211353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.211381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.211569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.211787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.211807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.211820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.214611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.344 [2024-07-15 11:51:44.224002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.224453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.224500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.224513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.224715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.224956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.224977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.224989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.227892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.344 [2024-07-15 11:51:44.236971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.237433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.237484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.237497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.237699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.237939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.237960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.237973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.240865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.344 [2024-07-15 11:51:44.249945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.250438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.250486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.250500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.344 [2024-07-15 11:51:44.250727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.344 [2024-07-15 11:51:44.250969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.344 [2024-07-15 11:51:44.250989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.344 [2024-07-15 11:51:44.251003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.344 [2024-07-15 11:51:44.254161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.344 [2024-07-15 11:51:44.263188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.344 [2024-07-15 11:51:44.263600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.344 [2024-07-15 11:51:44.263624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.344 [2024-07-15 11:51:44.263651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.345 [2024-07-15 11:51:44.263891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.345 [2024-07-15 11:51:44.264128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.345 [2024-07-15 11:51:44.264147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.345 [2024-07-15 11:51:44.264160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.345 [2024-07-15 11:51:44.267168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.345 [2024-07-15 11:51:44.276294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.345 [2024-07-15 11:51:44.276767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.345 [2024-07-15 11:51:44.276810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.345 [2024-07-15 11:51:44.276824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.345 [2024-07-15 11:51:44.277046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.345 [2024-07-15 11:51:44.277239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.345 [2024-07-15 11:51:44.277258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.345 [2024-07-15 11:51:44.277270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.345 [2024-07-15 11:51:44.280122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.345 [2024-07-15 11:51:44.289319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.345 [2024-07-15 11:51:44.289788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.345 [2024-07-15 11:51:44.289812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.345 [2024-07-15 11:51:44.289826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.345 [2024-07-15 11:51:44.290028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.345 [2024-07-15 11:51:44.290220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.345 [2024-07-15 11:51:44.290239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.345 [2024-07-15 11:51:44.290251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.345 [2024-07-15 11:51:44.293070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.345 [2024-07-15 11:51:44.302426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.345 [2024-07-15 11:51:44.302878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.345 [2024-07-15 11:51:44.302918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.345 [2024-07-15 11:51:44.302932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.345 [2024-07-15 11:51:44.303121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.345 [2024-07-15 11:51:44.303313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.345 [2024-07-15 11:51:44.303332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.345 [2024-07-15 11:51:44.303344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.345 [2024-07-15 11:51:44.306245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.345 [2024-07-15 11:51:44.315565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.345 [2024-07-15 11:51:44.316002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.345 [2024-07-15 11:51:44.316040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.345 [2024-07-15 11:51:44.316054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.345 [2024-07-15 11:51:44.316243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.345 [2024-07-15 11:51:44.316435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.345 [2024-07-15 11:51:44.316454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.345 [2024-07-15 11:51:44.316466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.345 [2024-07-15 11:51:44.319383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.604 [2024-07-15 11:51:44.329271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.329709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.329755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.329769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.329977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.330187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.330206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.330219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.333171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.604 [2024-07-15 11:51:44.342312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.342749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.342788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.342806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.343016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.343224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.343243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.343255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.346156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.604 [2024-07-15 11:51:44.355419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.355870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.355909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.355923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.356112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.356304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.356322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.356334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.359272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.604 [2024-07-15 11:51:44.368548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.369004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.369028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.369056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.369245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.369437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.369456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.369468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.372324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.604 [2024-07-15 11:51:44.381640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.382075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.382105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.382119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.382307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.382499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.382522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.382535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.385451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.604 [2024-07-15 11:51:44.394729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.395123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.395177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.395191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.395393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.395585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.395604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.395616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.398433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.604 [2024-07-15 11:51:44.407753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.408175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.408222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.408236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.408456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.408649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.408667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.408679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.411745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.604 [2024-07-15 11:51:44.420888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.421324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.421362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.421376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.421565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.421782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.421801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.421829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.424712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.604 [2024-07-15 11:51:44.433900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.434318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.434364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.434378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.434580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.434798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.434819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.434832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.437620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.604 [2024-07-15 11:51:44.446937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.447378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.447402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.447430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 [2024-07-15 11:51:44.447618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.604 [2024-07-15 11:51:44.447839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.604 [2024-07-15 11:51:44.447859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.604 [2024-07-15 11:51:44.447872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.604 [2024-07-15 11:51:44.450659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.604 [2024-07-15 11:51:44.460206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.604 [2024-07-15 11:51:44.460646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.604 [2024-07-15 11:51:44.460671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.604 [2024-07-15 11:51:44.460700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3128535 Killed "${NVMF_APP[@]}" "$@" 00:25:36.604 [2024-07-15 11:51:44.460945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:36.605 [2024-07-15 11:51:44.461182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.461202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.461215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.605 [2024-07-15 11:51:44.464188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3129489 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3129489 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3129489 ']' 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
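Here bdevperf.sh reports that the previous nvmf target process (pid 3128535) was killed at script line 35, and tgt_init brings up a replacement: nvmfappstart launches nvmf_tgt with -i 0 -e 0xFFFF -m 0xE inside the cvl_0_0_ns_spdk network namespace (new pid 3129489) and then waits for it to listen on /var/tmp/spdk.sock. A rough, hypothetical equivalent of that restart sequence, assuming the spdk repository root as the working directory, looks like:

    # sketch only; the test's nvmfappstart/waitforlisten helpers wrap approximately this
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the JSON-RPC socket until the new target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done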
00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.605 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.605 [2024-07-15 11:51:44.473509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.473916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.473965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.473980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.474219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.474412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.474431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.474443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.477416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.605 [2024-07-15 11:51:44.486906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.487322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.487372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.487386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.487574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.487798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.487819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.487833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.490787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.605 [2024-07-15 11:51:44.500215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.500606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.500632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.500647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.500860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.501066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.501086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.501099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.504271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.605 [2024-07-15 11:51:44.513396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.513800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.513826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.513841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.514056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.514265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.514284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.514296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.517232] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:36.605 [2024-07-15 11:51:44.517302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.605 [2024-07-15 11:51:44.517329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.605 [2024-07-15 11:51:44.526732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.527181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.527238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.527252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.527454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.527647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.527666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.527678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.530706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.605 [2024-07-15 11:51:44.540005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.540448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.540498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.540512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.540715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.540947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.540968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.540980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.543910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.605 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.605 [2024-07-15 11:51:44.553144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.553521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.553560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.553573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.553816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.554034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.554069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.554081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.557124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.605 [2024-07-15 11:51:44.566412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.566765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.566791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.566805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.566999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.567197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.567217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.567229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.605 [2024-07-15 11:51:44.570168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
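The EAL notice at the start of this entry only means that NUMA node 1 had no free 2048 kB hugepages when the target initialized; hugepage-backed memory is then typically taken from the nodes that do have them. A generic way to check the per-node hugepage situation on such a host (plain Linux sysfs, nothing SPDK-specific):

    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages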
00:25:36.605 [2024-07-15 11:51:44.579592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.605 [2024-07-15 11:51:44.580014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.605 [2024-07-15 11:51:44.580040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.605 [2024-07-15 11:51:44.580055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.605 [2024-07-15 11:51:44.580253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.605 [2024-07-15 11:51:44.580451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.605 [2024-07-15 11:51:44.580470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.605 [2024-07-15 11:51:44.580487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.606 [2024-07-15 11:51:44.582983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:36.606 [2024-07-15 11:51:44.583468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.864 [2024-07-15 11:51:44.593066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.593612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.593660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.593679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.593961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.594205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.594227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.594244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.597431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.865 [2024-07-15 11:51:44.606393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.606856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.606897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.606914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.607111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.607309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.607330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.607343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.610366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.619583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.619997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.620022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.620037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.620241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.620440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.620459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.620472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.623453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.865 [2024-07-15 11:51:44.632885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.633322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.633347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.633361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.633570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.633795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.633817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.633830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.636859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.646167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.646659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.646696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.646730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.646957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.647189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.647211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.647228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.650211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.865 [2024-07-15 11:51:44.659582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.659959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.660001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.660018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.660238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.660437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.660456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.660469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.663531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.672901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.673326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.673350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.673364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.673573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.673815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.673838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.673852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.676933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.865 [2024-07-15 11:51:44.686299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.686697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.686722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.686765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.686975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.687194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.687214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.687228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.690208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.691222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.865 [2024-07-15 11:51:44.691253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.865 [2024-07-15 11:51:44.691283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.865 [2024-07-15 11:51:44.691294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.865 [2024-07-15 11:51:44.691303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.865 [2024-07-15 11:51:44.691542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.865 [2024-07-15 11:51:44.691600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.865 [2024-07-15 11:51:44.691603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.865 [2024-07-15 11:51:44.699713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.700265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.700313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.700332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.700554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.700798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.700820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.700838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
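Interleaved with the reconnect errors, the restarted target prints its startup notices: tracepoint group mask 0xFFFF is enabled (from -e 0xFFFF) and reactors start on cores 1, 2 and 3 (core mask 0xE). Following the app_setup_trace notices above, the trace buffer can be inspected while the target runs or copied for offline analysis; assuming spdk_trace was built under build/bin:

    # live snapshot of the nvmf target's tracepoints, as the notice suggests
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0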
00:25:36.865 [2024-07-15 11:51:44.704006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.713184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.713776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.713836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.713858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.714104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.714322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.714344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.714362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.717522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.726653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.727228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.727264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.727300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.727519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.727765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.727800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.727819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.730985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.865 [2024-07-15 11:51:44.740185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.740709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.740753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.740789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.741013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.741246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.741267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.741285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.744492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.865 [2024-07-15 11:51:44.753606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.754083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.754119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.754153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.754388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.754622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.754644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.865 [2024-07-15 11:51:44.754661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.865 [2024-07-15 11:51:44.757928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.865 [2024-07-15 11:51:44.767178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.865 [2024-07-15 11:51:44.767731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.865 [2024-07-15 11:51:44.767790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.865 [2024-07-15 11:51:44.767810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.865 [2024-07-15 11:51:44.768044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.865 [2024-07-15 11:51:44.768276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.865 [2024-07-15 11:51:44.768297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.768314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.866 [2024-07-15 11:51:44.771529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.866 [2024-07-15 11:51:44.780819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.866 [2024-07-15 11:51:44.781235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.866 [2024-07-15 11:51:44.781261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.866 [2024-07-15 11:51:44.781291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.866 [2024-07-15 11:51:44.781499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.866 [2024-07-15 11:51:44.781711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.866 [2024-07-15 11:51:44.781756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.781771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.866 [2024-07-15 11:51:44.784943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.866 [2024-07-15 11:51:44.794362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.866 [2024-07-15 11:51:44.794821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.866 [2024-07-15 11:51:44.794849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.866 [2024-07-15 11:51:44.794866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.866 [2024-07-15 11:51:44.795081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.866 [2024-07-15 11:51:44.795307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.866 [2024-07-15 11:51:44.795329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.795343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.866 [2024-07-15 11:51:44.798612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.866 [2024-07-15 11:51:44.808010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.866 [2024-07-15 11:51:44.808431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.866 [2024-07-15 11:51:44.808458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.866 [2024-07-15 11:51:44.808473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.866 [2024-07-15 11:51:44.808711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): B 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.866 ad file descriptor 00:25:36.866 [2024-07-15 11:51:44.808949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.866 [2024-07-15 11:51:44.808972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.808986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.866 [2024-07-15 11:51:44.812281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:36.866 [2024-07-15 11:51:44.821484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.866 [2024-07-15 11:51:44.821848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.866 [2024-07-15 11:51:44.821877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.866 [2024-07-15 11:51:44.821893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.866 [2024-07-15 11:51:44.822121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.866 [2024-07-15 11:51:44.822333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.866 [2024-07-15 11:51:44.822355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.822368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.866 [2024-07-15 11:51:44.825614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.866 [2024-07-15 11:51:44.830527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.866 [2024-07-15 11:51:44.834997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.866 [2024-07-15 11:51:44.835340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.866 [2024-07-15 11:51:44.835367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.866 [2024-07-15 11:51:44.835390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.866 [2024-07-15 11:51:44.835619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.866 [2024-07-15 11:51:44.835852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.866 [2024-07-15 11:51:44.835874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.835888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.866 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.866 [2024-07-15 11:51:44.839160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.866 [2024-07-15 11:51:44.848619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.866 [2024-07-15 11:51:44.849003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.866 [2024-07-15 11:51:44.849031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:36.866 [2024-07-15 11:51:44.849049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:36.866 [2024-07-15 11:51:44.849288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:36.866 [2024-07-15 11:51:44.849493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.866 [2024-07-15 11:51:44.849513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.866 [2024-07-15 11:51:44.849526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.123 [2024-07-15 11:51:44.852824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.124 [2024-07-15 11:51:44.862167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.124 [2024-07-15 11:51:44.862546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.124 [2024-07-15 11:51:44.862588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:37.124 [2024-07-15 11:51:44.862612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:37.124 [2024-07-15 11:51:44.862892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:37.124 [2024-07-15 11:51:44.863136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:37.124 [2024-07-15 11:51:44.863159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:37.124 [2024-07-15 11:51:44.863172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.124 [2024-07-15 11:51:44.866392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:37.124 [2024-07-15 11:51:44.875644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.124 [2024-07-15 11:51:44.876156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.124 [2024-07-15 11:51:44.876195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:37.124 [2024-07-15 11:51:44.876215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:37.124 [2024-07-15 11:51:44.876448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:37.124 [2024-07-15 11:51:44.876691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:37.124 [2024-07-15 11:51:44.876713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:37.124 [2024-07-15 11:51:44.876731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.124 Malloc0 00:25:37.124 [2024-07-15 11:51:44.880026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.124 [2024-07-15 11:51:44.889313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.124 [2024-07-15 11:51:44.889728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.124 [2024-07-15 11:51:44.889762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf7540 with addr=10.0.0.2, port=4420 00:25:37.124 [2024-07-15 11:51:44.889793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7540 is same with the state(5) to be set 00:25:37.124 [2024-07-15 11:51:44.890007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf7540 (9): Bad file descriptor 00:25:37.124 [2024-07-15 11:51:44.890246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:37.124 [2024-07-15 11:51:44.890267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:37.124 [2024-07-15 11:51:44.890280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.124 [2024-07-15 11:51:44.893559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.124 [2024-07-15 11:51:44.899584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.124 [2024-07-15 11:51:44.902925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.124 11:51:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3128821 00:25:37.124 [2024-07-15 11:51:44.948090] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:47.089 00:25:47.089 Latency(us) 00:25:47.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.089 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:47.089 Verification LBA range: start 0x0 length 0x4000 00:25:47.089 Nvme1n1 : 15.01 6875.48 26.86 10238.78 0.00 7457.14 555.24 19903.53 00:25:47.089 =================================================================================================================== 00:25:47.089 Total : 6875.48 26.86 10238.78 0.00 7457.14 555.24 19903.53 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:47.089 rmmod nvme_tcp 00:25:47.089 rmmod nvme_fabrics 00:25:47.089 rmmod nvme_keyring 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3129489 ']' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3129489 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3129489 ']' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3129489 00:25:47.089 11:51:54 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3129489 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3129489' 00:25:47.089 killing process with pid 3129489 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3129489 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3129489 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.089 11:51:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.988 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:48.988 00:25:48.988 real 0m22.649s 00:25:48.988 user 1m0.507s 00:25:48.988 sys 0m4.518s 00:25:48.988 11:51:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:48.988 11:51:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:48.988 ************************************ 00:25:48.988 END TEST nvmf_bdevperf 00:25:48.988 ************************************ 00:25:48.988 11:51:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:48.988 11:51:56 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:48.988 11:51:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:48.988 11:51:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.988 11:51:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.988 ************************************ 00:25:48.988 START TEST nvmf_target_disconnect 00:25:48.988 ************************************ 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:48.988 * Looking for test storage... 
00:25:48.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:48.988 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.989 11:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:51.519 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:51.519 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.519 11:51:58 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:51.519 Found net devices under 0000:84:00.0: cvl_0_0 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:51.519 Found net devices under 0000:84:00.1: cvl_0_1 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:51.519 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.520 11:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:51.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:25:51.520 00:25:51.520 --- 10.0.0.2 ping statistics --- 00:25:51.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.520 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:51.520 00:25:51.520 --- 10.0.0.1 ping statistics --- 00:25:51.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.520 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:51.520 ************************************ 00:25:51.520 START TEST nvmf_target_disconnect_tc1 00:25:51.520 ************************************ 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:51.520 
11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.520 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.520 [2024-07-15 11:51:59.171148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.520 [2024-07-15 11:51:59.171216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dff790 with addr=10.0.0.2, port=4420 00:25:51.520 [2024-07-15 11:51:59.171258] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:51.520 [2024-07-15 11:51:59.171280] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:51.520 [2024-07-15 11:51:59.171292] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:51.520 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:51.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:51.520 Initializing NVMe Controllers 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:51.520 00:25:51.520 real 0m0.088s 00:25:51.520 user 0m0.034s 00:25:51.520 sys 
0m0.054s 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:51.520 ************************************ 00:25:51.520 END TEST nvmf_target_disconnect_tc1 00:25:51.520 ************************************ 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:51.520 ************************************ 00:25:51.520 START TEST nvmf_target_disconnect_tc2 00:25:51.520 ************************************ 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3132657 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3132657 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3132657 ']' 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.520 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.520 [2024-07-15 11:51:59.281217] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:51.520 [2024-07-15 11:51:59.281303] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.520 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.520 [2024-07-15 11:51:59.345155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:51.520 [2024-07-15 11:51:59.455449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.520 [2024-07-15 11:51:59.455519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.520 [2024-07-15 11:51:59.455547] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.520 [2024-07-15 11:51:59.455559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.520 [2024-07-15 11:51:59.455568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.520 [2024-07-15 11:51:59.455626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:51.521 [2024-07-15 11:51:59.455683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:51.521 [2024-07-15 11:51:59.455784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:51.521 [2024-07-15 11:51:59.455787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 Malloc0 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:51.779 11:51:59 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 [2024-07-15 11:51:59.632660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 [2024-07-15 11:51:59.660932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3132742 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:51.779 11:51:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.779 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:54.326 11:52:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3132657 00:25:54.326 11:52:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 [2024-07-15 11:52:01.685160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting 
I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 Write completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.326 [2024-07-15 11:52:01.685486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:54.326 Read completed with error (sct=0, sc=8) 00:25:54.326 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 
00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 [2024-07-15 11:52:01.685849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Write completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read 
completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 Read completed with error (sct=0, sc=8) 00:25:54.327 starting I/O failed 00:25:54.327 [2024-07-15 11:52:01.686185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.327 [2024-07-15 11:52:01.686348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.686386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.686590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.686638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.686805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.686837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.686941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.686966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.687117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.687156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.687297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.687320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 
00:25:54.327 [2024-07-15 11:52:01.687428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.687451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.687588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.687612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.687776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.687803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.687912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.687938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.688074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.688111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.688225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.688262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.688442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.688465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.688592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.688615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.688792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.688818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.688958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.688983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 
00:25:54.327 [2024-07-15 11:52:01.689166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.689189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.689373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.327 [2024-07-15 11:52:01.689395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.327 qpair failed and we were unable to recover it. 00:25:54.327 [2024-07-15 11:52:01.689542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.689565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.689670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.689693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.689815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.689841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.689979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.690003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.690131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.690154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.690306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.690344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.690491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.690514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.690652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.690677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 
00:25:54.328 [2024-07-15 11:52:01.690854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.690895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.691083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.691134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.691323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.691348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.691485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.691508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.691651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.691674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.691836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.691861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.691969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.691994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.692125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.692148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.692289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.692314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.692465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.692504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 
00:25:54.328 [2024-07-15 11:52:01.692649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.692672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.692790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.692815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.692948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.692973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.693115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.693138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.693280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.693318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.693507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.693562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.693691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.693722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.693879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.693903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.694050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.694073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.694213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.694237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 
00:25:54.328 [2024-07-15 11:52:01.694383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.694407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.694563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.694587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.694748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.694774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.694883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.694908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.695055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.695079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.695226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.695249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.695385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.695409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.695536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.695561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.695733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.695766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.695869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.695894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 
00:25:54.328 [2024-07-15 11:52:01.696041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.696064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.696204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.696228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.328 [2024-07-15 11:52:01.696366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.328 [2024-07-15 11:52:01.696389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.328 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.696554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.696577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.696697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.696736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.696853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.696878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.696977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.697001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.697147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.697191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.697370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.697392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.697616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.697639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 
00:25:54.329 [2024-07-15 11:52:01.697785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.697810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.697907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.697932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.698054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.698077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.698277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.698310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.698498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.698521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.698667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.698691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.698802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.698827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.698934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.698958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.699107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.699145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.699387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.699430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 
00:25:54.329 [2024-07-15 11:52:01.699603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.699628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.699757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.699782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.699872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.699896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.700039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.700063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.700247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.700301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.700469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.700492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.700675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.700703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.700837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.700876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.700998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.701039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.701195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.701219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 
00:25:54.329 [2024-07-15 11:52:01.701327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.701370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.701566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.701616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.701816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.701842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.701970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.701995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.702201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.702227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.702419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.702465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.702613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.702637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.702799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.702825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.702931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.702956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.703055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.703088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 
00:25:54.329 [2024-07-15 11:52:01.703257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.703281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.703427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.703452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.703584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.329 [2024-07-15 11:52:01.703608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.329 qpair failed and we were unable to recover it. 00:25:54.329 [2024-07-15 11:52:01.703828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.703867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.703993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.704034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.704274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.704298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.704473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.704523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.704658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.704682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.704806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.704831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.704955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.704980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 
00:25:54.330 [2024-07-15 11:52:01.705162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.705185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.705376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.705426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.705592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.705615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.705780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.705807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.705960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.705985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.706162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.706189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.706299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.706337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.706472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.706495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.706691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.706714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.706829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.706855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 
00:25:54.330 [2024-07-15 11:52:01.706976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.707138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.707287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.707453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.707622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.707825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.707949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.707982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.708119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.708156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.708287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.708309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.708482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.708506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 
00:25:54.330 [2024-07-15 11:52:01.708632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.708656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.708797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.708823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.708956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.708981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.709100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.709136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.709273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.709297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.330 qpair failed and we were unable to recover it. 00:25:54.330 [2024-07-15 11:52:01.709428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.330 [2024-07-15 11:52:01.709452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.709593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.709630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.709825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.709852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.709986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.710011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.710192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.710239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-07-15 11:52:01.710444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.710483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.710615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.710638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.710750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.710777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.710920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.710945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.711074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.711113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.711328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.711376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.711517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.711543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.711693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.711717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.711866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.711891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.712032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.712056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-07-15 11:52:01.712239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.712267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.712413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.712435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.712630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.712654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.712777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.712802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.712918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.712952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.713067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.713105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.713214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.713237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.713407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.713430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.713550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.713574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.713746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.713783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-07-15 11:52:01.713918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.713944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.714136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.714169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.714345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.714395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.714596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.714642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.714823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.714848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.714981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.715018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.715221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.715249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.715415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.715456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.715608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.715631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.715820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.715845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-07-15 11:52:01.715963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.715988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.716101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.716124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.716266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.716289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.716485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.716509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.716702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.716725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.716889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.331 [2024-07-15 11:52:01.716926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-07-15 11:52:01.717117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.717141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.717277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.717300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.717466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.717516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.717659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.717683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-07-15 11:52:01.717813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.717839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.717941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.717965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.718087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.718110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.718250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.718288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.718454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.718477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.718622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.718657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.718788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.718813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.718917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.718941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.719048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.719071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.719187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.719210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-07-15 11:52:01.719379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.719416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.719595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.719625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.719769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.719794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.719955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.719992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.720110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.720135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.720423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.720447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.720571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.720594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.720812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.720843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.721000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.721039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.721175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.721198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-07-15 11:52:01.721381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.721424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.721606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.721630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.721776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.721801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.721950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.721975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.722170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.722192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.722398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.722453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.722605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.722634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.722767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.722791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.722913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.722937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.723093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.723116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-07-15 11:52:01.723254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.723276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.723397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.723421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.723567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.723591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.723711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.723735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.723930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.723954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.724091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.724125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-07-15 11:52:01.724334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.332 [2024-07-15 11:52:01.724357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.724523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.724546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.724712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.724741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.724838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.724863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 
00:25:54.333 [2024-07-15 11:52:01.724963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.724987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.725149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.725186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.725419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.725468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.725635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.725658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.725799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.725824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.725950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.725974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.726175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.726198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.726354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.726398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.726508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.726546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.726680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.726703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 
00:25:54.333 [2024-07-15 11:52:01.726856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.726881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.727006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.727045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.727226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.727252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.727428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.727451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.727606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.727629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.727773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.727798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.727948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.727972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.728134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.728156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.728319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.728380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.728481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.728505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 
00:25:54.333 [2024-07-15 11:52:01.728645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.728669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.728835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.728860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.729029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.729064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.729247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.729269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.729458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.729481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.729652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.729683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.729844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.729873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.730061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.730098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.730261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.730307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.730497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.730519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 
00:25:54.333 [2024-07-15 11:52:01.730647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.730670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.730816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.730842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.730992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.731016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.731229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.731251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.731423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.731473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.731651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.731674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-07-15 11:52:01.731820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.333 [2024-07-15 11:52:01.731859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.731982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.732005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.732192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.732241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.732400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.732457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 
00:25:54.334 [2024-07-15 11:52:01.732633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.732659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.732823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.732848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.733033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.733056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.733197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.733242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.733426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.733454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.733592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.733641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.733797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.733823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.733948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.733973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.734167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.734190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.734383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.734429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 
00:25:54.334 [2024-07-15 11:52:01.734583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.734616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.734806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.734837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.734974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.734998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.735158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.735181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.735295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.735334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.735480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.735503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.735611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.735635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.735808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.735846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.735996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.736036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.736188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.736212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 
00:25:54.334 [2024-07-15 11:52:01.736424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.736448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.736599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.736622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.736863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.736887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.737039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.737061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.737248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.737295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.737470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.737512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.737621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.737664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.737865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.737889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.738050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.738098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.738278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.738327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 
00:25:54.334 [2024-07-15 11:52:01.738509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.738561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.738725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.738780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.738901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.738940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.739122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.739169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.739322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.739367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.739599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.739641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.739778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.739802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.334 [2024-07-15 11:52:01.739907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.334 [2024-07-15 11:52:01.739931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.334 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.740055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.740079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.740224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.740247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 
00:25:54.335 [2024-07-15 11:52:01.740464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.740510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.740657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.740688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.740834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.740859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.740959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.740983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.741149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.741188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.741336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.741358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.741580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.741602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.741746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.741786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.741971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.741996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.742141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.742178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 
00:25:54.335 [2024-07-15 11:52:01.742339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.742361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.742600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.742623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.742799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.742830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.742982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.743018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.743178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.743231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.743414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.743462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.743548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.743586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.743746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.743770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.743987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.744035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.744202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.744253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 
00:25:54.335 [2024-07-15 11:52:01.744431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.744485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.744631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.744654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.744801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.744826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.744992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.745030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.745177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.745229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.745367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.745422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.745572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.745600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.745748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.745772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.745981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.746005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.746166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.746215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 
00:25:54.335 [2024-07-15 11:52:01.746372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.746427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.746603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.746626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.335 [2024-07-15 11:52:01.746750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.335 [2024-07-15 11:52:01.746775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.335 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.746930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.746953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.747099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.747122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.747290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.747312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.747547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.747569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.747721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.747765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.747883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.747906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.748034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.748091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 
00:25:54.336 [2024-07-15 11:52:01.748264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.748286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.748429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.748466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.748587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.748610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.748719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.748764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.748922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.748945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.749116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.749138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.749359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.749385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.749556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.749579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.749812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.749837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.749991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.750023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 
00:25:54.336 [2024-07-15 11:52:01.750155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.750215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.750380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.750429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.750551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.750588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.750718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.750752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.750900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.750924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.751050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.751073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.751216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.751253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.751423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.751448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.751618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.751640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.751815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.751839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 
00:25:54.336 [2024-07-15 11:52:01.752044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.752066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.752198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.752220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.752371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.752394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.752575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.752597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.752772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.752796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.752995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.753033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.753184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.753236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.753420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.753446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.753587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.753610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.753761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.753785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 
00:25:54.336 [2024-07-15 11:52:01.753940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.753963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.754164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.754213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.336 [2024-07-15 11:52:01.754359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.336 [2024-07-15 11:52:01.754416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.336 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.754537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.754573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.754715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.754747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.754866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.754895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.755079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.755123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.755262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.755312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.755474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.755507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.755700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.755733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 
00:25:54.337 [2024-07-15 11:52:01.756064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.756128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.756309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.756361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.756521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.756569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.756710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.756733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.756968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.756992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.757134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.757156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.757312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.757335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.757538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.757594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.757794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.757823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.757974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.757998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 
00:25:54.337 [2024-07-15 11:52:01.758195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.758222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.758483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.758531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.758774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.758797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.758962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.758990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.759172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.759201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.759410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.759469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.759605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.759628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.759774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.759798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.759933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.759956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.760127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.760193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 
00:25:54.337 [2024-07-15 11:52:01.760338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.760394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.760547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.760570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.760790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.760830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.760966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.760989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.761186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.761234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.761404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.761437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.761694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.761732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.761841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.761864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.762026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.762088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.762264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.762286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 
00:25:54.337 [2024-07-15 11:52:01.762479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.762529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.762674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.762700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.762920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.762950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.337 [2024-07-15 11:52:01.763163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.337 [2024-07-15 11:52:01.763217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.337 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.763347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.763396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.763565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.763587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.763771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.763793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.763994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.764055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.764215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.764269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.764451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.764474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 
00:25:54.338 [2024-07-15 11:52:01.764620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.764665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.764869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.764917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.765053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.765102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.765268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.765317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.765467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.765489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.765629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.765651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.765922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.765970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.766164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.766219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.766342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.766364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.766524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.766547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 
00:25:54.338 [2024-07-15 11:52:01.766707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.766730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.766848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.766903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.767054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.767107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.767339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.767390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.767580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.767602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.767764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.767789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.767919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.767973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.768102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.768192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.768341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.768395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.768593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.768614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 
00:25:54.338 [2024-07-15 11:52:01.768734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.768775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.768958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.769023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.769171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.769222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.769361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.769398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.769617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.769638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.769807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.769831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.770033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.770079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.770242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.770264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.770466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.770489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.770684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.770707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 
00:25:54.338 [2024-07-15 11:52:01.770905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.770955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.771090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.771142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.771351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.771406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.771606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.771632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.338 qpair failed and we were unable to recover it. 00:25:54.338 [2024-07-15 11:52:01.771752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.338 [2024-07-15 11:52:01.771791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.771971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.772025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.772230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.772289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.772441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.772462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.772615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.772651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.772928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.772978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 
00:25:54.339 [2024-07-15 11:52:01.773152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.773200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.773390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.773439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.773586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.773611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.773856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.773906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.774142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.774192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.774329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.774385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.774581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.774608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.774752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.774775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.774958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.775023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.775234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.775284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 
00:25:54.339 [2024-07-15 11:52:01.775444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.775466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.775689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.775735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.775908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.775971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.776257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.776323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.776470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.776523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.776713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.776764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.776936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.776985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.777175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.777224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.777399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.777445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.777593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.777615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 
00:25:54.339 [2024-07-15 11:52:01.777874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.777923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.778123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.778180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.778382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.778439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.778603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.778625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.778758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.778781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.778897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.778919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.779104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.779179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.779312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.779366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.779584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.339 [2024-07-15 11:52:01.779608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.339 qpair failed and we were unable to recover it. 00:25:54.339 [2024-07-15 11:52:01.779767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.779841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 
00:25:54.340 [2024-07-15 11:52:01.780124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.780173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.780296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.780318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.780550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.780574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.780698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.780721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.781012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.781058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.781205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.781250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.781432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.781500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.781650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.781671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.781787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.781810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.782129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.782181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 
00:25:54.340 [2024-07-15 11:52:01.782389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.782435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.782597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.782619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.782794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.782860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.782969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.783023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.783181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.783224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.783379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.783420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.783606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.783629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.783804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.783827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.784040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.784063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.784213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.784234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 
00:25:54.340 [2024-07-15 11:52:01.784435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.784458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.784610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.784632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.784763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.784786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.784900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.784926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.785190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.785239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.785426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.785474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.785628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.785650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.785865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.785915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.786052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.786106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.786264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.786316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 
00:25:54.340 [2024-07-15 11:52:01.786420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.786444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.786638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.786685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.786863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.786914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.787080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.787127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.787325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.787374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.787520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.787541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.787708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.787754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.787905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.787928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.788077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.788113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 00:25:54.340 [2024-07-15 11:52:01.788323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.340 [2024-07-15 11:52:01.788373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.340 qpair failed and we were unable to recover it. 
00:25:54.340 [2024-07-15 11:52:01.788519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.788541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.788688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.788711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.788869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.788933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.789056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.789103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.789279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.789301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.789509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.789531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.789694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.789716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.789898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.789961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.790068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.790104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.790277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.790299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 
00:25:54.341 [2024-07-15 11:52:01.790526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.790567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.790772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.790801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.790937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.790986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.791308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.791369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.791515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.791569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.791750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.791772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.791941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.791963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.792149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.792197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.792349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.792399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.792556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.792578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 
00:25:54.341 [2024-07-15 11:52:01.792771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.792793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.792978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.793025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.793214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.793263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.793436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.793492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.793661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.793683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.793870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.793922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.794110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.794157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.794287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.794344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.794461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.794484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.794622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.794644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 
00:25:54.341 [2024-07-15 11:52:01.794815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.794862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.794959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.795022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.795185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.795219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.795347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.795370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.795493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.795517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.795682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.795719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.795929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.795980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.796143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.796189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.796365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.796416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.341 [2024-07-15 11:52:01.796559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.796581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 
00:25:54.341 [2024-07-15 11:52:01.796755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.341 [2024-07-15 11:52:01.796808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.341 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.796982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.797037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.797173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.797232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.797396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.797455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.797608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.797631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.797790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.797828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.797991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.798014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.798190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.798212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.798389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.798440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.798614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.798637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 
00:25:54.342 [2024-07-15 11:52:01.798808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.798869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.799001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.799052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.799230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.799277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.799415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.799445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.799555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.799578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.799701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.799723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.799932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.799956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.800117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.800140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.800299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.800320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.800454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.800491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 
00:25:54.342 [2024-07-15 11:52:01.800650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.800686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.800844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.800867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.801047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.801075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.801257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.801309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.801462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.801484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.801652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.801689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.801860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.801907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.802058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.802107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.802290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.802340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.802488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.802510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 
00:25:54.342 [2024-07-15 11:52:01.802616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.802638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.802820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.802844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.803027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.803074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.803262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.803311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.803464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.803485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.803677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.803700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.803881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.803935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.804162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.804209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.804312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.804363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.804514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.804552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 
00:25:54.342 [2024-07-15 11:52:01.804754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.804792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.342 [2024-07-15 11:52:01.804989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.342 [2024-07-15 11:52:01.805037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.342 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.805164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.805186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.805352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.805401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.805620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.805652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.805778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.805800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.805949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.805995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.806190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.806238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.806450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.806502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.806675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.806697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 
00:25:54.343 [2024-07-15 11:52:01.806934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.806987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.807092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.807147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.807372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.807422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.807610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.807632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.807755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.807778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.807982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.808034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.808195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.808244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.808399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.808456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.808631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.808663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.808895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.808943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 
00:25:54.343 [2024-07-15 11:52:01.809143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.809194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.809370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.809418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.809633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.809655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.809841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.809898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.810084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.810133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.810329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.810379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.810536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.810558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.810735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.810776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.810906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.810958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.811119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.811168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 
00:25:54.343 [2024-07-15 11:52:01.811319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.811365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.811509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.811546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.811704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.811748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.811932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.811979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.812109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.812132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.812336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.812358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.343 [2024-07-15 11:52:01.812505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.343 [2024-07-15 11:52:01.812527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.343 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.812671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.812709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.812915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.812938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.813133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.813155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 
00:25:54.344 [2024-07-15 11:52:01.813297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.813345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.813537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.813560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.813784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.813809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.813965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.814013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.814193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.814239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.814388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.814437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.814625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.814647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.814763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.814786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.815044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.815090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.815257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.815307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 
00:25:54.344 [2024-07-15 11:52:01.815475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.815522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.815724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.815768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.815874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.815926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.816102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.816150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.816321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.816368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.816537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.816559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.816701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.816744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.816914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.816938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.817106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.817162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.817377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.817427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 
00:25:54.344 [2024-07-15 11:52:01.817574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.817601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.817767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.817814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.818014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.818063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.818224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.818285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.818491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.818537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.818696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.818718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.818924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.818980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.819112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.819165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.819360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.819408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.819600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.819624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 
00:25:54.344 [2024-07-15 11:52:01.819783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.819844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.820039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.820089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.820244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.820297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.820486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.820511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.820630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.820668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.820836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.820874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.821024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.344 [2024-07-15 11:52:01.821079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.344 qpair failed and we were unable to recover it. 00:25:54.344 [2024-07-15 11:52:01.821234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.821282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.821420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.821457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.821643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.821665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 
00:25:54.345 [2024-07-15 11:52:01.821851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.821908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.822093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.822142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.822294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.822343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.822525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.822547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.822666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.822703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.822876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.822928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.823068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.823114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.823307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.823361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.823536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.823559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.823765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.823803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 
00:25:54.345 [2024-07-15 11:52:01.823993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.824053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.824177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.824227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.824539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.824596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.824768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.824825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.824963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.825010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.825178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.825237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.825430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.825481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.825669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.825691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.825915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.825966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.826131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.826180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 
00:25:54.345 [2024-07-15 11:52:01.826374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.826423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.826576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.826598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.826813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.826867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.827190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.827244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.827445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.827488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.827678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.827700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.827849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.827910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.828060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.828107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.828265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.828314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.828552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.828578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 
00:25:54.345 [2024-07-15 11:52:01.828751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.828805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.828995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.829044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.829219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.829274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.829471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.829526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.829701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.829755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.829926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.829977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.830152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.830198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.345 [2024-07-15 11:52:01.830357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.345 [2024-07-15 11:52:01.830410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.345 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.830642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.830663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.830848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.830902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 
00:25:54.346 [2024-07-15 11:52:01.831122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.831169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.831337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.831386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.831533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.831555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.831774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.831797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.831950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.831994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.832138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.832183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.832333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.832379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.832600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.832623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.832808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.832867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.833101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.833148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 
00:25:54.346 [2024-07-15 11:52:01.833350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.833401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.833550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.833572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.833846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.833870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.834070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.834127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.834264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.834310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.834610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.834658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.834852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.834906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.835048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.835100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.835254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.835303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.835549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.835571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 
00:25:54.346 [2024-07-15 11:52:01.835713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.835757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.835972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.835996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.836135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.836182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.836299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.836340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.836543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.836566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.836734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.836777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.836930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.836953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.837137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.837184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.837338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.837387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.837556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.837578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 
00:25:54.346 [2024-07-15 11:52:01.837752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.837793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.837918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.837966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.838265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.838311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.838500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.838548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.838771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.838811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.839003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.839027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.839153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.839189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.839430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.839484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.346 [2024-07-15 11:52:01.839602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.346 [2024-07-15 11:52:01.839623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.346 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.839826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.839884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 
00:25:54.347 [2024-07-15 11:52:01.840060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.840109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.840290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.840313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.840471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.840492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.840663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.840685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.840862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.840886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.841070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.841107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.841239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.841261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.841414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.841462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.841582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.841618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.841767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.841790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 
00:25:54.347 [2024-07-15 11:52:01.841962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.842012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.842167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.842217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.842335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.842358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.842510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.842533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.842679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.842702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.842901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.842951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.843105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.843153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.843342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.843396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.843572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.843594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.843706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.843728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 
00:25:54.347 [2024-07-15 11:52:01.843870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.843924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.844070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.844093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.844262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.844285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.844458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.844484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.844619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.844641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.844756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.844779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.844885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.844934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.845109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.845159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.845322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.845370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.845478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.845501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 
00:25:54.347 [2024-07-15 11:52:01.845669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.845691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.845839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.845896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.846045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.846100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.846286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.846335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.846467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.846488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.846628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.846650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.846828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.347 [2024-07-15 11:52:01.846852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.347 qpair failed and we were unable to recover it. 00:25:54.347 [2024-07-15 11:52:01.847019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.847061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.847156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.847185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.847316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.847339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 
00:25:54.348 [2024-07-15 11:52:01.847905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.847930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.848125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.848163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.848319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.848343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.848482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.848518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.848640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.848663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.848833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.848859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.848998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.849021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.849132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.849181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.849319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.849343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 00:25:54.348 [2024-07-15 11:52:01.849469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.348 [2024-07-15 11:52:01.849491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.348 qpair failed and we were unable to recover it. 
00:25:54.348 [2024-07-15 11:52:01.849638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.348 [2024-07-15 11:52:01.849661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:54.348 qpair failed and we were unable to recover it.
00:25:54.348-00:25:54.353 [11:52:01.849816 through 11:52:01.887113] the same three-line sequence repeats continuously for every reconnect attempt: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:25:54.353 [2024-07-15 11:52:01.887306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.887328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.887499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.887520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.887651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.887687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.887832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.887857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.887986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.888026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.888214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.888245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.888388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.888409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.888514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.888537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.888650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.888672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 00:25:54.353 [2024-07-15 11:52:01.888817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.353 [2024-07-15 11:52:01.888866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.353 qpair failed and we were unable to recover it. 
00:25:54.353 [2024-07-15 11:52:01.888978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.889026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.889204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.889226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.889400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.889421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.889547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.889585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.889688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.889710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.889870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.889896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.890053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.890096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.890237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.890259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.890394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.890417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.890506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.890528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 
00:25:54.354 [2024-07-15 11:52:01.890651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.890674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.890792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.890815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.890987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.891042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.891158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.891205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.891381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.891404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.891543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.891580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.891711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.891773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.891886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.891937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.892075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.892128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.892275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.892326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 
00:25:54.354 [2024-07-15 11:52:01.892447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.892474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.892708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.892730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.892852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.892889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.892998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.893021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.893153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.893176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.893332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.893354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.893529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.893552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.893711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.893732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.893859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.893882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.893984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.894009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 
00:25:54.354 [2024-07-15 11:52:01.894151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.894173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.894356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.894379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.894546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.894568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.894675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.894697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.894842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.894908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.895073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.895120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.895298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.895320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.895435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.895457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.895569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.895593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.354 [2024-07-15 11:52:01.895707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.895730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 
00:25:54.354 [2024-07-15 11:52:01.895868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.354 [2024-07-15 11:52:01.895893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.354 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.896966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.896990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.897098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.897121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.897255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.897278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 
00:25:54.355 [2024-07-15 11:52:01.897477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.897500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.897636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.897659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.897828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.897853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.897956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.897980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.898094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.898133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.898268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.898304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.898436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.898474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.898577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.898599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.898712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.898735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.898877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.898901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 
00:25:54.355 [2024-07-15 11:52:01.899019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.899180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.899362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.899527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.899686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.899829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.899937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.899975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.900144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.900183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.900317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.900354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.900489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.900511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 
00:25:54.355 [2024-07-15 11:52:01.900651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.900676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.900807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.900832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.900970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.900993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.901141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.901179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.901366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.901389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.901523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.901560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.901673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.901696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.901809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.901834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.901962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.901985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.902099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.902123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 
00:25:54.355 [2024-07-15 11:52:01.902292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.902316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.355 [2024-07-15 11:52:01.902457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.355 [2024-07-15 11:52:01.902495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.355 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.902633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.902674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.902791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.902844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.902955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.903009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.903195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.903246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.903365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.903388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.903527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.903550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.903676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.903700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.903850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.903889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 
00:25:54.356 [2024-07-15 11:52:01.903994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.904154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.904338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.904471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.904661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.904819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.904940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.904964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.905116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.905244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.905378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 
00:25:54.356 [2024-07-15 11:52:01.905530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.905682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.905830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.905949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.905974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.906067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.906193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.906329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.906492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.906636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.906786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 
00:25:54.356 [2024-07-15 11:52:01.906931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.906954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.907069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.907093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.907262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.907286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.907422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.907446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.907586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.907610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.907721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.907751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.356 [2024-07-15 11:52:01.907867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.356 [2024-07-15 11:52:01.907891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.356 qpair failed and we were unable to recover it. 00:25:54.357 [2024-07-15 11:52:01.907995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.357 [2024-07-15 11:52:01.908019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.357 qpair failed and we were unable to recover it. 00:25:54.357 [2024-07-15 11:52:01.908151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.357 [2024-07-15 11:52:01.908175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.357 qpair failed and we were unable to recover it. 00:25:54.357 [2024-07-15 11:52:01.908297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.357 [2024-07-15 11:52:01.908321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.357 qpair failed and we were unable to recover it. 
00:25:54.357 [2024-07-15 11:52:01.908437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.357 [2024-07-15 11:52:01.908461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:54.357 qpair failed and we were unable to recover it.
00:25:54.358 [2024-07-15 11:52:01.916016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.358 [2024-07-15 11:52:01.916055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420
00:25:54.358 qpair failed and we were unable to recover it.
00:25:54.362 [2024-07-15 11:52:01.947170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.362 [2024-07-15 11:52:01.947198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420
00:25:54.362 qpair failed and we were unable to recover it.
00:25:54.362 [2024-07-15 11:52:01.947341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.947369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.947507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.947535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.947635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.947664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.947774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.947801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.947903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.947929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.948028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.948175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.948318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.948493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.948648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 
00:25:54.362 [2024-07-15 11:52:01.948768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.948893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.948917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.949910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.949934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.950043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.950070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 
00:25:54.362 [2024-07-15 11:52:01.950169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.950197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.362 [2024-07-15 11:52:01.950351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.362 [2024-07-15 11:52:01.950378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.362 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.950485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.950512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.950642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.950669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.950797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.950823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.950927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.950952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 
00:25:54.363 [2024-07-15 11:52:01.951609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.951892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.951982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.952134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.952282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.952407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.952577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.952699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.952891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.952916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 
00:25:54.363 [2024-07-15 11:52:01.953015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.953180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.953320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.953464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.953612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.953728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.953866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.953890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.954009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.954161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.954352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 
00:25:54.363 [2024-07-15 11:52:01.954495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.954661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.954791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.954910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.954950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.955158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.955182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.955320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.955347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.955506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.955531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.955625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.955649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.955767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.955813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.955926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.955951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 
00:25:54.363 [2024-07-15 11:52:01.956044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.956068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.956197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.363 [2024-07-15 11:52:01.956222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.363 qpair failed and we were unable to recover it. 00:25:54.363 [2024-07-15 11:52:01.956318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.956345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.956449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.956476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.956582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.956609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.956760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.956784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.956900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.956924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.957024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.957168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.957332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 
00:25:54.364 [2024-07-15 11:52:01.957489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.957635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.957776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.957921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.957946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.958075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.958216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.958378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.958504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.958677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.958830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 
00:25:54.364 [2024-07-15 11:52:01.958947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.958971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.959063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.959086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.959231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.959259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.959386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.959410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.959537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.959561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.959669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.959696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.959825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.959869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.960023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.960171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.960330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 
00:25:54.364 [2024-07-15 11:52:01.960481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.960624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.960776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.960937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.960964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.961102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.961127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.961231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.961256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.961403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.961428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.961539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.961567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.961691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.961719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.961861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.961886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 
00:25:54.364 [2024-07-15 11:52:01.961991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.962015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.962149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.962176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.962326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.364 [2024-07-15 11:52:01.962351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.364 qpair failed and we were unable to recover it. 00:25:54.364 [2024-07-15 11:52:01.962455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.962480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.962610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.962635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.962730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.962760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.962853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.962878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.962968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.962993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.963116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.963144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.963285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.963310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 
00:25:54.365 [2024-07-15 11:52:01.963397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.963422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.963529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.963554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.963675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.963715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.963865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.963906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.964069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.964096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.964200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.964225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.964359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.964402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.964530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.964559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.964655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.964679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.964816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.964843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 
00:25:54.365 [2024-07-15 11:52:01.964975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.965145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.965289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.965531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.965688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.965829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.965944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.965970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.966104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.966147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.966346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.966383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.966526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.966551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 
00:25:54.365 [2024-07-15 11:52:01.966662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.966693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.966813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.966840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.966951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.966996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.967116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.967141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.967275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.967313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.967444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.967468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.365 [2024-07-15 11:52:01.967558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.365 [2024-07-15 11:52:01.967581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.365 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.967744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.967777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.967898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.967923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.968068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 
00:25:54.366 [2024-07-15 11:52:01.968243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.968361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.968492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.968634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.968761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.968884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.968909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.969021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.969047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.969185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.969210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.969362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.969387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.969520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.969545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 
00:25:54.366 [2024-07-15 11:52:01.969656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.969682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.969804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.969838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.969976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.970871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.970999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 
00:25:54.366 [2024-07-15 11:52:01.971164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.971338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.971498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.971635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.971782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.971904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.971928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.972081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.972106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.972203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.972228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.972339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.972363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.972515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.972540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 
00:25:54.366 [2024-07-15 11:52:01.972656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.972681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.972837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.972863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.973014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.973040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.973179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.973206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.973303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.973328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.973450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.973475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.973564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.973588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.366 qpair failed and we were unable to recover it. 00:25:54.366 [2024-07-15 11:52:01.973682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.366 [2024-07-15 11:52:01.973707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.973866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.973894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.973993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 
00:25:54.367 [2024-07-15 11:52:01.974148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.974300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.974453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.974605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.974733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.974900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.974926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 
00:25:54.367 [2024-07-15 11:52:01.975568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.975871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.975975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.976096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.976251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.976398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.976553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.976689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.976947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.976988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 
00:25:54.367 [2024-07-15 11:52:01.977141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.977168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.977269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.977295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.977389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.977415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.977565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.977591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.977717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.977751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.977878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.977904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.978021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.978046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.978182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.978208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.978308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.978334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.978423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.978449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 
00:25:54.367 [2024-07-15 11:52:01.978552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.978576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.978709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.978733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.979918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.979959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.980095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.980128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.980293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.980324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.980459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.980490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.980648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.980680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.980790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.980816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.367 qpair failed and we were unable to recover it. 00:25:54.367 [2024-07-15 11:52:01.980912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.367 [2024-07-15 11:52:01.980937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.981062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.981087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 
00:25:54.368 [2024-07-15 11:52:01.981317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.981348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.981463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.981494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.981615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.981654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.981772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.981816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.981939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.981969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.982071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.982095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.982199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.982224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.982425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.982450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.982582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.982606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.982735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.982766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 
00:25:54.368 [2024-07-15 11:52:01.982866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.982891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.983975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.983999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.984094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 
00:25:54.368 [2024-07-15 11:52:01.984209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.984233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.984433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.984458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.984550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.984585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.985278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.985308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.985448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.985474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.986239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.986272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.986439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.986469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.986613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.986643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.986768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.986809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.986936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.986961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 
00:25:54.368 [2024-07-15 11:52:01.987082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.987107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.987273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.987301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.987464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.987493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.987594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.987623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.987751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.987776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.987897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.987922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.988052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.988080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.988292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.988331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.988549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.988577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 00:25:54.368 [2024-07-15 11:52:01.988706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.368 [2024-07-15 11:52:01.988735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.368 qpair failed and we were unable to recover it. 
00:25:54.369 [2024-07-15 11:52:01.988896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.988920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.989068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.989097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.989261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.989286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.989413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.989452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.989563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.989591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.989751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.989794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.989894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.989919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.990044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.990069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.990184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.990212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.990371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.990400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 
00:25:54.369 [2024-07-15 11:52:01.990549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.990577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.990803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.990829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.990977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.991001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.991147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.991180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.991392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.991437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.991565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.991594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.991725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.991760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.991880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.991905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.992021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.992151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 
00:25:54.369 [2024-07-15 11:52:01.992320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.992482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.992670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.992821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.992936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.992961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.993110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.993140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.993282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.993322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.993454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.993496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.993602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.993631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.993788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.993814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 
00:25:54.369 [2024-07-15 11:52:01.993944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.993970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.994102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.994145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.994358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.994391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.994527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.994556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.994702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.369 [2024-07-15 11:52:01.994727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.369 qpair failed and we were unable to recover it. 00:25:54.369 [2024-07-15 11:52:01.994874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.994900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.995039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.995064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.995205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.995235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.995444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.995474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.995609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.995642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 
00:25:54.370 [2024-07-15 11:52:01.995798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.995824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.995926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.995952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.996078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.996104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.996230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.996273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.996406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.996436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.996544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.996573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.996733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.996763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.996881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.996905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.997025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.997218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 
00:25:54.370 [2024-07-15 11:52:01.997360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.997506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.997667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.997824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.997966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.997991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.998091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.998116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.998288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.998317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.998423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.998453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.998566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.998590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.998752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.998799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 
00:25:54.370 [2024-07-15 11:52:01.998898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.998924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.999050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.999079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.999218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.370 [2024-07-15 11:52:01.999243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.370 qpair failed and we were unable to recover it. 00:25:54.370 [2024-07-15 11:52:01.999389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:01.999431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:01.999562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:01.999591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:01.999767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:01.999811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.000010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.000232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.000377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.000513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 
00:25:54.371 [2024-07-15 11:52:02.000649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.000781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.000930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.000955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.001076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.001120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.001244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.001269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.001417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.001442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.001583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.001612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.001705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.001734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.001915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.001941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.002047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.002072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 
00:25:54.371 [2024-07-15 11:52:02.002254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.002283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.002379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.002409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.002564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.002593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.002776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.002819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.002921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.002947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.003111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.003147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.003279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.003324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.003463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.003491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.003625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.003654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.003860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.003885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 
00:25:54.371 [2024-07-15 11:52:02.003980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.004005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.004204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.004233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.004394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.004423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.004546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.004575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.004689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.004713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.004831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.004856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.005003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.005029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.005163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.005209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.005393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.005422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.005550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.005578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 
00:25:54.371 [2024-07-15 11:52:02.005717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.005755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.005903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.005927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.371 [2024-07-15 11:52:02.006125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.371 [2024-07-15 11:52:02.006149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.371 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.006272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.006300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.006432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.006461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.006567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.006596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.006707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.006752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.006884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.006909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.007029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.007058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.007188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.007216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 
00:25:54.372 [2024-07-15 11:52:02.007382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.007411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.007545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.007574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.007698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.007727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.007880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.007905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.008039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.008171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.008344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.008474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.008606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.008803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 
00:25:54.372 [2024-07-15 11:52:02.008948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.008973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.009962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.009987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.010098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.010127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.010240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.010268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 
00:25:54.372 [2024-07-15 11:52:02.010445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.010474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.010575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.010604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.010715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.010790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.010941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.010966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.011089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.011128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.011233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.011274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.011409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.011437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.011545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.011574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.011747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.011776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.372 [2024-07-15 11:52:02.011881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.011910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 
00:25:54.372 [2024-07-15 11:52:02.012044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.372 [2024-07-15 11:52:02.012073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.372 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.012202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.012231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.012351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.012380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.012476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.012504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.012635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.012664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.012800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.012830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.012957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.012986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.013140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.013169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.013302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.013331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.013483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.013511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 
00:25:54.373 [2024-07-15 11:52:02.013653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.013681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.013829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.013874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.014091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.014263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.014396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.014556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.014712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.014849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.014977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.015149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 
00:25:54.373 [2024-07-15 11:52:02.015276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.015437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.015591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.015751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.015916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.015945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.016089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.016118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.016217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.016245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.016379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.016408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.016549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.016578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.016709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.016745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 
00:25:54.373 [2024-07-15 11:52:02.016902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.016931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.017903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.017931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.018035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.373 [2024-07-15 11:52:02.018064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.373 qpair failed and we were unable to recover it. 00:25:54.373 [2024-07-15 11:52:02.018189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.018218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 
00:25:54.374 [2024-07-15 11:52:02.018351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.018380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.018516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.018552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.018712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.018762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.018864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.018893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.018994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.019123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.019274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.019402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.019554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.019744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 
00:25:54.374 [2024-07-15 11:52:02.019867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.019895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.019988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.020174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.020332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.020494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.020655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.020810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.020967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.020996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.021095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.021124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.021253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.021281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 
00:25:54.374 [2024-07-15 11:52:02.021414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.021442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.021570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.021599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.021701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.021730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.021946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.021975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.022109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.022138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.022273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.022301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.022433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.022461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.022597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.022625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.022726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.022763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 00:25:54.374 [2024-07-15 11:52:02.022875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.022904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.374 qpair failed and we were unable to recover it. 
00:25:54.374 [2024-07-15 11:52:02.023038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.374 [2024-07-15 11:52:02.023066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.023172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.023200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.023405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.023434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.023589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.023618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.023728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.023765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.023891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.023920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.024024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.024185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.024319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.024502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 
00:25:54.375 [2024-07-15 11:52:02.024625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.024766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.024923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.024952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.025108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.025137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.025266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.025294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.025390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.025419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.025623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.025651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.025784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.025813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.025949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.025978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.026110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.026138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 
00:25:54.375 [2024-07-15 11:52:02.026295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.026324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.026418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.026447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.026579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.026607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.026758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.026788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.026890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.026918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.027014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.027172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.027335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.027494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.027625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 
00:25:54.375 [2024-07-15 11:52:02.027772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.027909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.027938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.028079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.028108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.028234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.028262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.028374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.028402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.028608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.028637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.028779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.028809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.028904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.028933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.029039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.375 [2024-07-15 11:52:02.029067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.375 qpair failed and we were unable to recover it. 00:25:54.375 [2024-07-15 11:52:02.029200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.029233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 
00:25:54.376 [2024-07-15 11:52:02.029358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.029387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.029501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.029530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.029661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.029690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.029819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.029849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.029964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.029992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.030127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.030155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.030258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.030287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.030409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.030437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.030568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.030597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.030730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.030764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 
00:25:54.376 [2024-07-15 11:52:02.030892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.030920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.031054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.031082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.031216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.031244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.031410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.031439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.031554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.031582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.031714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.031775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.031884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.031913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.032069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.032097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.032252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.032281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.032410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.032438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 
00:25:54.376 [2024-07-15 11:52:02.032543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.032571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.032702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.032729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.032943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.032972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.033101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.033129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.033228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.033256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.033352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.033381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.033476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.033509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.033715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.033750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.033852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.033880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.034001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 
00:25:54.376 [2024-07-15 11:52:02.034157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.034339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.034497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.034624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.034763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.034931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.034960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.035098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.035153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.035273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.376 [2024-07-15 11:52:02.035302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.376 qpair failed and we were unable to recover it. 00:25:54.376 [2024-07-15 11:52:02.035506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.035534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.035635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.035663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 
00:25:54.377 [2024-07-15 11:52:02.035800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.035829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.035927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.035955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.036087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.036115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.036242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.036270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.036368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.036396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.036518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.036679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.036707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.036819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.036848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.037002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.037130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 
00:25:54.377 [2024-07-15 11:52:02.037285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.037474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.037604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.037773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.037918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.037946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.038073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.038204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.038330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.038489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.038628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 
00:25:54.377 [2024-07-15 11:52:02.038784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.038943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.038971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.039098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.039126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.039223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.039251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.039409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.039437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.039531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.039559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.039718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.039773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.039987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.040016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.040171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.040203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.040366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.040401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 
00:25:54.377 [2024-07-15 11:52:02.040573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.040601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.040701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.040729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.040868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.040914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.041015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.041043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.041182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.041211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.041343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.041371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.377 [2024-07-15 11:52:02.041494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.377 [2024-07-15 11:52:02.041522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.377 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.041680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.041709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.041841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.041869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.041969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.041997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 
00:25:54.378 [2024-07-15 11:52:02.042099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.042127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.042260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.042288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.042445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.042473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.042611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.042640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.042802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.042830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.042939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.042967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.043060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.043088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.043221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.043250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.043377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.043406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.043534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.043562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 
00:25:54.378 [2024-07-15 11:52:02.043706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.043735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.043948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.043978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.044107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.044135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.044347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.044376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.044509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.044537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.044751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.044780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.044986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.045014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.045151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.045195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.045347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.045384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.045559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.045588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 
00:25:54.378 [2024-07-15 11:52:02.045713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.045747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.045870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.045916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.046951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.046979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.047107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.047135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 
00:25:54.378 [2024-07-15 11:52:02.047265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.047293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.047453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.047482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.047639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.047668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.378 qpair failed and we were unable to recover it. 00:25:54.378 [2024-07-15 11:52:02.047795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.378 [2024-07-15 11:52:02.047824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.047983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.048011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.048145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.048190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.048314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.048362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.048569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.048597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.048751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.048780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.048937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.048986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 
00:25:54.379 [2024-07-15 11:52:02.049157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.049189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.049342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.049375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.049532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.049560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.049714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.049747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.049859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.049891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.050043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.050088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.050246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.050292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.050395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.050423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.050579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.050607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 00:25:54.379 [2024-07-15 11:52:02.050764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.379 [2024-07-15 11:52:02.050793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:54.379 qpair failed and we were unable to recover it. 
00:25:54.379 [2024-07-15 11:52:02.050933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.379 [2024-07-15 11:52:02.050962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-15 11:52:02.051068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.379 [2024-07-15 11:52:02.051097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420
00:25:54.379 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry through 11:52:02.070 ...]
00:25:54.382 [2024-07-15 11:52:02.070748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.382 [2024-07-15 11:52:02.070787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420
00:25:54.382 qpair failed and we were unable to recover it.
00:25:54.382 [2024-07-15 11:52:02.070926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.382 [2024-07-15 11:52:02.070965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:54.382 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 ...]
00:25:54.385 [2024-07-15 11:52:02.083575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.385 [2024-07-15 11:52:02.083598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:54.385 qpair failed and we were unable to recover it.
00:25:54.385 [2024-07-15 11:52:02.083771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.083797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.083928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.083966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.084140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.084163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.084338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.084360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.084535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.084558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.084748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.084772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.084923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.084947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.085135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.085159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.085272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.085310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.085437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.085461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 
00:25:54.385 [2024-07-15 11:52:02.085565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.085592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.085789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.085814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.085999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.086023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.086174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.086197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.086310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.086344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.086568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.086591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.086815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.086840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.086970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.087008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.087182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.087206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.087336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.087359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 
00:25:54.385 [2024-07-15 11:52:02.087502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.087526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.087676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.087700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.087839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.087864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.088016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.088059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.088275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.088307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.088431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.088454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.088603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.088627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.088756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.088806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.088934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.088959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.089103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.089140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 
00:25:54.385 [2024-07-15 11:52:02.089277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.089300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.089451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.089475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.089608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.089647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.089782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.385 [2024-07-15 11:52:02.089807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.385 qpair failed and we were unable to recover it. 00:25:54.385 [2024-07-15 11:52:02.089922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.089947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.090051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.090075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.090192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.090216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.090344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.090368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.090504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.090528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.090649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.090673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 
00:25:54.386 [2024-07-15 11:52:02.090818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.090843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.091071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.091094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.091235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.091258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.091418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.091455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.091688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.091711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.091870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.091901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.092047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.092091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.092262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.092285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.092446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.092479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.092658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.092681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 
00:25:54.386 [2024-07-15 11:52:02.092884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.092912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.093109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.093132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.093274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.093297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.093463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.093501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.093701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.093746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.093888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.093913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.094017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.094041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.094229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.094252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.094411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.094442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.094624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.094651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 
00:25:54.386 [2024-07-15 11:52:02.094801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.094841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.095012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.095036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.095170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.095193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.095415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.095438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.095607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.095630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.095807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.095830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.096051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.096074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.096251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.096274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.096463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.096490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.096661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.096684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 
00:25:54.386 [2024-07-15 11:52:02.096881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.096907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.097042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.097067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.386 [2024-07-15 11:52:02.097191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.386 [2024-07-15 11:52:02.097229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.386 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.097362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.097401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.097533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.097557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.097732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.097763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.097880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.097905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.098133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.098157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.098295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.098318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.098505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.098528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 
00:25:54.387 [2024-07-15 11:52:02.098676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.098699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.098883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.098908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.099167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.099191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.099356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.099380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.099549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.099572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.099720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.099751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.099883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.099907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.100160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.100186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.100368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.100392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.100605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.100628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 
00:25:54.387 [2024-07-15 11:52:02.100764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.100792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.100924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.100948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.101210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.101233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.101412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.101435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.101658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.101681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.101827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.101853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.102008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.102032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.102228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.102251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.102403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.102426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.102574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.102598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 
00:25:54.387 [2024-07-15 11:52:02.102789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.102820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.102951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.102996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.103167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.103191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.103397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.103420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.103596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.103619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.103761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.103800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.103924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.103948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.104097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.104121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.387 qpair failed and we were unable to recover it. 00:25:54.387 [2024-07-15 11:52:02.104339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.387 [2024-07-15 11:52:02.104363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.104483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.104507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 
00:25:54.388 [2024-07-15 11:52:02.104654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.104678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.104885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.104910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.105043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.105080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.105277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.105325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.105448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.105484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.105671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.105708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.105874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.105899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.106045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.106082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.106273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.106310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.106429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.106466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 
00:25:54.388 [2024-07-15 11:52:02.106650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.106687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.106917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.106942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.107087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.107124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.107290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.107326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.107502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.107539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.107701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.107789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.108059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.108097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.108303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.108340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.108480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.108518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.108638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.108675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 
00:25:54.388 [2024-07-15 11:52:02.108857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.108886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.109042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.109079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.109202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.109239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.109378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.109415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.109559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.109596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.109767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.109813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.109899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.109924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.110040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.110064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.110275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.110312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.110469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.110505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 
00:25:54.388 [2024-07-15 11:52:02.110627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.110664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.388 [2024-07-15 11:52:02.110853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.388 [2024-07-15 11:52:02.110878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.388 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.110998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.111040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.111240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.111277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.111414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.111451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.111630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.111667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.111794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.111819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.111979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.112004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.112155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.112192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.112368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.112405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 
00:25:54.389 [2024-07-15 11:52:02.112537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.112573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.112759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.112809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.112972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.112997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.113152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.113191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.113316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.113355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.113584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.113624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.113809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.113834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.114005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.114029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.114229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.114266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.114438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.114474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 
00:25:54.389 [2024-07-15 11:52:02.114628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.114665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.114801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.114825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.114961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.114986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.115121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.115144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.115325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.115361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.115514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.115555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.115712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.115757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.115947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.115971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.116117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.116170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.116356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.116395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 
00:25:54.389 [2024-07-15 11:52:02.116539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.116584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.116828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.116853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.116986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.117009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.117184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.117223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.117410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.117448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.117657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.117696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.389 [2024-07-15 11:52:02.117857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.389 [2024-07-15 11:52:02.117883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.389 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.118115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.118154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.118307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.118346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.118487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.118527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 
00:25:54.390 [2024-07-15 11:52:02.118688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.118727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.118923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.118947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.119054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.119093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.119271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.119310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.119470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.119509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.119637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.119676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.119869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.119896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.120071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.120110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.120264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.120303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.120458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.120497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 
00:25:54.390 [2024-07-15 11:52:02.120669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.120722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.120901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.120926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.121092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.121131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.121329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.121381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.121725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.121808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.121958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.121997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.122138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.122178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.122306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.122351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.122570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.122609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.122833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.122881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 
00:25:54.390 [2024-07-15 11:52:02.123072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.123111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.123263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.123302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.123480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.123524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.123730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.123778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.123943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.123982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.124194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.124233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.124422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.124460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.124616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.124657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.124797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.124838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.125003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.125044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 
00:25:54.390 [2024-07-15 11:52:02.125236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.125287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.125415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.125456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.125655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.125696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.125894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.125934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.126088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.126126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.126301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.126339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.390 [2024-07-15 11:52:02.126527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.390 [2024-07-15 11:52:02.126566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.390 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.126719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.126771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.126936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.126976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.127135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.127176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 
00:25:54.391 [2024-07-15 11:52:02.127348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.127389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.127529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.127569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.127774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.127798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.127932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.127971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.128228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.128252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.128378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.128402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.128554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.128578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.128712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.128740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.128837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.128860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.129073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.129114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 
00:25:54.391 [2024-07-15 11:52:02.129269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.129310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.129483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.129525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.129713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.129779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.129950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.129993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.130135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.130175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.130398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.130442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.130607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.130648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.130780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.130828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.131002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.131049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.131191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.131232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 
00:25:54.391 [2024-07-15 11:52:02.131392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.131432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.131628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.131676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.131864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.131905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.132119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.132160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.132460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.132484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.132657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.132699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.132855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.132880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.133063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.133104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.133316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.133362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.133554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.133607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 
00:25:54.391 [2024-07-15 11:52:02.133813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.133840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.134005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.134054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.134201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.134242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.391 [2024-07-15 11:52:02.134386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.391 [2024-07-15 11:52:02.134427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.391 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.134590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.134631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.134800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.134826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.135008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.135056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.135270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.135315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.135493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.135534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.135765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.135815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 
00:25:54.392 [2024-07-15 11:52:02.135949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.135975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.136095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.136135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.136315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.136356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.136513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.136553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.136727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.136777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.136941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.136987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.137193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.137234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.137446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.137490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.137651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.137693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.137871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.137917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 
00:25:54.392 [2024-07-15 11:52:02.138081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.138122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.138276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.138316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.138479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.138520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.138639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.138680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.138927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.138969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.139168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.139212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.139381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.139423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.139638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.139685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.139940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.139982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.140190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.140233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 
00:25:54.392 [2024-07-15 11:52:02.140402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.140446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.140615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.140658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.140892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.140941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.141152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.141222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.141396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.141439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.141585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.141628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.141831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.141895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.142105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.392 [2024-07-15 11:52:02.142159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.392 qpair failed and we were unable to recover it. 00:25:54.392 [2024-07-15 11:52:02.142346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.142390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.142612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.142655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 
00:25:54.393 [2024-07-15 11:52:02.142926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.142955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.143102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.143130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.143320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.143348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.143529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.143565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.143721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.143756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.143930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.143958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.144063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.144091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.144224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.144252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.144452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.144495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.144711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.144763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 
00:25:54.393 [2024-07-15 11:52:02.145012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.145058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.145214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.145257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.145387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.145431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.145591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.145635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.145843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.145883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.146044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.146087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.146258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.146304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.146519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.146563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.146706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.146757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.146914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.146942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 
00:25:54.393 [2024-07-15 11:52:02.147100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.147144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.147394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.147436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.147657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.147706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.147945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.147973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.148144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.148187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.148422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.148467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.148800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.148830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.148979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.149014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.149427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.149494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.149731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.149797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 
00:25:54.393 [2024-07-15 11:52:02.149919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.149948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.150163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.150230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.150435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.150480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.150724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.150803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.150959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.150988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.151187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.151231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.151427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.151469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.151673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.151716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.393 qpair failed and we were unable to recover it. 00:25:54.393 [2024-07-15 11:52:02.151940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.393 [2024-07-15 11:52:02.151977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.152174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.152217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 
00:25:54.394 [2024-07-15 11:52:02.152477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.152520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.152700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.152765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.152919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.152947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.153116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.153162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.153360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.153405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.153616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.153662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.153856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.153885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.154043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.154088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.154281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.154328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.154553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.154598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 
00:25:54.394 [2024-07-15 11:52:02.154816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.154845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.154992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.155021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.155195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.155241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.155429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.155483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.155713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.155806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.156045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.156099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.156274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.156321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.156646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.156698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.156935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.156973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.157139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.157185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 
00:25:54.394 [2024-07-15 11:52:02.157511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.157556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.157728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.157806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.157956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.157984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.158217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.158263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.158455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.158501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.158657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.158703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.158925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.158954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.159119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.159172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.159420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.159466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.159616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.159663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 
00:25:54.394 [2024-07-15 11:52:02.159858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.159897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.160048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.160093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.160323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.160369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.160511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.160574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.160767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.160823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.161009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.161062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.161238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.161284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.161464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.161517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.394 [2024-07-15 11:52:02.161755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.394 [2024-07-15 11:52:02.161813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.394 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.161935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.161963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 
00:25:54.395 [2024-07-15 11:52:02.162152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.162197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.162394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.162440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.162608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.162653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.162840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.162869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.162996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.163024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.163253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.163303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.163591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.163637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.163871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.163910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.164055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.164119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.164347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.164393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 
00:25:54.395 [2024-07-15 11:52:02.164533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.164578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.164761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.164813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.164930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.164959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.165167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.165220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.165377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.165426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.165602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.165647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.165827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.165856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.165976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.166005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.166142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.166187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.166495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.166541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 
00:25:54.395 [2024-07-15 11:52:02.166714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.166775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.167012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.167068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.167317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.167362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.167542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.167588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.167789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.167818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.167930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.167958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.168114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.168160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.168331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.168386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.168530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.168580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.168759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.168823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 
00:25:54.395 [2024-07-15 11:52:02.168981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.169009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.169220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.169267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.169429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.169474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.169648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.169698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.169906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.169934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.170103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.170149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.170395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.170443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.170755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.170808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.170975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.395 [2024-07-15 11:52:02.171003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.395 qpair failed and we were unable to recover it. 00:25:54.395 [2024-07-15 11:52:02.171219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.171268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 
00:25:54.396 [2024-07-15 11:52:02.171498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.171547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.171798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.171828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.171960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.171989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.172160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.172208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.172508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.172556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.172805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.172855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.173046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.173097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.173220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.173269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.173450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.173506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.173698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.173765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 
00:25:54.396 [2024-07-15 11:52:02.173955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.174004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.174249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.174298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.174545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.174594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.174759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.174808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.174975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.175024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.175212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.175265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.175493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.175542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.175785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.175843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.176064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.176107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.176298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.176357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 
00:25:54.396 [2024-07-15 11:52:02.176588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.176637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.176840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.176897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.177125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.177174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.177374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.177423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.177649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.177699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.178022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.178071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.178340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.178390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.178710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.178776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.179008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.179057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.179255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.179304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 
00:25:54.396 [2024-07-15 11:52:02.179489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.179538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.179735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.179809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.180039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.180088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.180246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.180295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.180451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.180500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.180662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.180711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.180952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.181002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.181169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.181218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.181460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.181509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 00:25:54.396 [2024-07-15 11:52:02.181708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.396 [2024-07-15 11:52:02.181771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.396 qpair failed and we were unable to recover it. 
00:25:54.397 [2024-07-15 11:52:02.181923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.181972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.182289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.182339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.182523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.182573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.182735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.182805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.183037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.183086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.183337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.183387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.183563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.183620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.183840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.183899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.184070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.184120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.184346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.184395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 
00:25:54.397 [2024-07-15 11:52:02.184607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.184656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.184901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.184956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.185173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.185222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.185399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.185447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.185633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.185690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.185867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.185917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.186074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.186124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.186352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.186401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.186551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.186601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.186778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.186828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 
00:25:54.397 [2024-07-15 11:52:02.187006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.187054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.187202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.187254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.187435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.187485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.187679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.187728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.187934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.187983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.188160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.188209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.188378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.188426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.188605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.188673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.188833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.188890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.189035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.189092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 
00:25:54.397 [2024-07-15 11:52:02.189230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.189279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.189456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.189505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.189701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.189759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.189936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.189985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.397 [2024-07-15 11:52:02.190153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.397 [2024-07-15 11:52:02.190201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.397 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.190372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.190421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.190564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.190613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.190763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.190812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.190996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.191045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.191211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.191260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 
00:25:54.398 [2024-07-15 11:52:02.191425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.191474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.191654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.191704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.191905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.191955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.192185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.192234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.192508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.192557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.192770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.192820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.192961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.193011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.193248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.193316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.193525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.193573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.193754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.193805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 
00:25:54.398 [2024-07-15 11:52:02.193980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.194034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.194248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.194297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.194522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.194572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.194754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.194804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.194960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.195009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.195182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.195241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.195448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.195497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.195679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.195728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.195925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.195975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.196148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.196196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 
00:25:54.398 [2024-07-15 11:52:02.196368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.196416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.196635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.196684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.196891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.196941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.197092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.197141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.197300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.197349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.197524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.197573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.197755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.197805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.197956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.198017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.198196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.198245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.198416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.198465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 
00:25:54.398 [2024-07-15 11:52:02.198612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.198661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.198853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.198904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.199117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.199167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.199438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.199486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.199635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.398 [2024-07-15 11:52:02.199684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.398 qpair failed and we were unable to recover it. 00:25:54.398 [2024-07-15 11:52:02.199950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.200001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.200188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.200239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.200391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.200441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.200590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.200639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.200827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.200876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 
00:25:54.399 [2024-07-15 11:52:02.201049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.201099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.201250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.201300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.201463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.201513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.201697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.201762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.201926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.201975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.202140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.202190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.202389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.202438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.202621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.202673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.202848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.202898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.203079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.203128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 
00:25:54.399 [2024-07-15 11:52:02.203303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.203352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.203527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.203576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.203775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.203825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.203968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.204015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.204319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.204369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.204523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.204572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.204753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.204803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.204962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.205011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.205186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.205234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.205414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.205463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 
00:25:54.399 [2024-07-15 11:52:02.205630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.205679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.205859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.205909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.206046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.206096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.206240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.206289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.206467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.206519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.206713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.206771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.206952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.207001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.207175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.207232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.207419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.207469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.207641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.207689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 
00:25:54.399 [2024-07-15 11:52:02.207891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.207941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.208116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.208165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.208308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.208357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.208536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.208586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.208827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.208877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.399 [2024-07-15 11:52:02.209046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.399 [2024-07-15 11:52:02.209096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.399 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.209262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.209311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.209484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.209533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.209685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.209734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.209929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.209979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 
00:25:54.400 [2024-07-15 11:52:02.210157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.210208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.210386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.210436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.210586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.210646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.210831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.211060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.211109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.211294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.211343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.211526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.211575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.211731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.211808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.212008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.212056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.212214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.212266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 
00:25:54.400 [2024-07-15 11:52:02.212460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.212509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.212710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.212770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.212949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.212998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.213178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.213228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.213437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.213486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.213634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.213683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.213889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.213939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.214111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.214160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.214333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.214382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.214554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.214603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 
00:25:54.400 [2024-07-15 11:52:02.214789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.214843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.214993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.215042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.215229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.215277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.215433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.215482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.215631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.215680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.215835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.215884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.216057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.216107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.216253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.216309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.216455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.216504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.216704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.216764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 
00:25:54.400 [2024-07-15 11:52:02.216939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.216988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.217165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.217214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.217408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.217457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.217604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.217652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.217843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.217894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.218042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.218090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.400 qpair failed and we were unable to recover it. 00:25:54.400 [2024-07-15 11:52:02.218301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.400 [2024-07-15 11:52:02.218349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.218496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.218545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.218719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.218781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.218958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.219007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 
00:25:54.401 [2024-07-15 11:52:02.219205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.219254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.219433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.219483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.219658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.219706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.219940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.219991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.220175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.220225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.220420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.220469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.220662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.220711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.220904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.220953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.221128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.221177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.221331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.221379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 
00:25:54.401 [2024-07-15 11:52:02.221549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.221598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.221763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.221814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.221983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.222032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.222183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.222232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.222418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.222468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.222636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.222684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.222907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.222957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.223102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.223151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.223295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.223350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.223544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.223592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 
00:25:54.401 [2024-07-15 11:52:02.223767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.223818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.223997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.224046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.224225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.224274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.224453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.224502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.224664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.224690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.224883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.224933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.225134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.225182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.225333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.225390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.225572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.225621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.225823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.225873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 
00:25:54.401 [2024-07-15 11:52:02.226037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.401 [2024-07-15 11:52:02.226086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.401 qpair failed and we were unable to recover it. 00:25:54.401 [2024-07-15 11:52:02.226263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.226312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.226485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.226539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.226774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.226824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.227070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.227119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.227292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.227341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.227526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.227574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.227760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.227810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.228019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.228069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.228272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.228321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 
00:25:54.402 [2024-07-15 11:52:02.228477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.228527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.228735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.228795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.228975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.229024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.229199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.229258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.229439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.229487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.229636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.229685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.229874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.229925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.230097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.230147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.230347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.230396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.230565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.230614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 
00:25:54.402 [2024-07-15 11:52:02.230759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.230807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.230980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.231030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.231203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.231251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.231401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.231450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.231636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.231686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.231869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.231919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.232072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.232121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.232291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.232341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.232540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.232588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.232767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.232824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 
00:25:54.402 [2024-07-15 11:52:02.233037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.233086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.233268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.233317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.233495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.233552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.233763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.233813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.233983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.234032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.234221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.234270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.234433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.234481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.234660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.234717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.234987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.235036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.235245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.235294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 
00:25:54.402 [2024-07-15 11:52:02.235471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.402 [2024-07-15 11:52:02.235524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.402 qpair failed and we were unable to recover it. 00:25:54.402 [2024-07-15 11:52:02.235712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.235772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.235943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.235992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.236174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.236222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.236394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.236443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.236618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.236667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.236874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.236924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.237130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.237179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.237340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.237389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.237575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.237623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 
00:25:54.403 [2024-07-15 11:52:02.237871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.237921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.238159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.238209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.238407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.238456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.238662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.238713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.239033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.239100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.239279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.239351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.239546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.239594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.239767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.239817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.240025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.240074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.240289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.240337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 
00:25:54.403 [2024-07-15 11:52:02.240520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.240571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.240776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.240832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.241025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.241074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.241282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.241331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.241535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.241593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.241770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.241819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.242010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.242060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.242267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.242315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.242526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.242575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.242774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.242824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 
00:25:54.403 [2024-07-15 11:52:02.243048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.243097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.243305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.243355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.243568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.243617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.243797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.243856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.244038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.244087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.244289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.244341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.244514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.244563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.244764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.244824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.245129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.245178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 00:25:54.403 [2024-07-15 11:52:02.245356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.245405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.403 qpair failed and we were unable to recover it. 
00:25:54.403 [2024-07-15 11:52:02.245699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.403 [2024-07-15 11:52:02.245757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.245923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.245983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.246246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.246312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.246610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.246659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.246959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.247026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.247264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.247313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.247573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.247622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.247803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.247861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.248153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.248202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.248442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.248491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 
00:25:54.404 [2024-07-15 11:52:02.248719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.248783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.248995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.249061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.249354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.249419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.249661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.249710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.249935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.250002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.250274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.250324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.250502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.250554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.250709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.250783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.251078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.251126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.251295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.251361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 
00:25:54.404 [2024-07-15 11:52:02.251590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.251638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.251885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.251954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.252107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.252174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.252421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.252487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.252634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.252691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.252897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.252965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.253232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.253298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.253478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.253526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.253718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.253789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.253953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.254020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 
00:25:54.404 [2024-07-15 11:52:02.254206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.254255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.254430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.254488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.254665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.254715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.254907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.254956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.255184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.255233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.255444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.255493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.404 [2024-07-15 11:52:02.255720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.404 [2024-07-15 11:52:02.255786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.404 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.255975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.256023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.256258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.256307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.256525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.256592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 
00:25:54.405 [2024-07-15 11:52:02.256835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.256903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.257075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.257144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.257371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.257420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.257692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.257751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.257905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.257983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.258189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.258262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.258503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.258552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.258735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.258810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.258992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.259057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.259312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.259378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 
00:25:54.405 [2024-07-15 11:52:02.259616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.259665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.259875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.259951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.260125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.260191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.260425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.260491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.260706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.260767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.260985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.261052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.261229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.261295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.261506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.261574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.261789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.261840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.262031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.262084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 
00:25:54.405 [2024-07-15 11:52:02.262288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.262355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.262611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.262660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.262958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.263034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.263296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.263361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.263648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.263705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.263954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.264022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.264272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.264338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.264635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.264684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.264937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.265004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.405 qpair failed and we were unable to recover it. 00:25:54.405 [2024-07-15 11:52:02.265295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.405 [2024-07-15 11:52:02.265360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 
00:25:54.406 [2024-07-15 11:52:02.265629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.265678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.265937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.266005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.266235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.266303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.266526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.266596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.266867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.266934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.267167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.267234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.267480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.267546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.267840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.267908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.268212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.268278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.268565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.268632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 
00:25:54.406 [2024-07-15 11:52:02.268875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.268942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.269176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.269244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.269569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.269635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.269858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.269926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.270159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.270225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.270578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.270627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.270887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.270955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.271129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.271196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.271444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.271511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.271805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.271856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 
00:25:54.406 [2024-07-15 11:52:02.272105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.272172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.272538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.272607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.272860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.272910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.273182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.273249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.273475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.273543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.273831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.273897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.274119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.274185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.274419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.274486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.274728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.274804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.275008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.275075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 
00:25:54.406 [2024-07-15 11:52:02.275312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.275378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.275609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.275658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.275865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.275934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.276193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.276258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.276554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.276627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.276875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.276943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-07-15 11:52:02.277200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-07-15 11:52:02.277267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.277517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.277584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.277849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.277918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.278137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.278209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-07-15 11:52:02.278503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.278570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.278809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.278879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.279143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.279210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.279453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.279518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.279815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.279865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.280085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.280152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.280414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.280480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.280772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.280823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.281104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.281172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.281431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.281497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-07-15 11:52:02.281770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.281821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.282031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.282100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.282390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.282458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.282718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.282793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.283070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.283119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.283414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.283480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.283706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.283767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.283924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.283973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.284232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.284300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.284593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.284659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-07-15 11:52:02.285035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.285087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.285395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.285464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.285764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.285814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.286108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.286157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.286417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.286484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.286784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.286834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.287134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.287183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.287463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.287529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.287734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.287795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.288031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.288081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-07-15 11:52:02.290951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.291030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.291341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.291415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.291698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.291759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.292068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-07-15 11:52:02.292118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-07-15 11:52:02.292429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-07-15 11:52:02.292488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-07-15 11:52:02.292691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-07-15 11:52:02.292751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-07-15 11:52:02.293038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-07-15 11:52:02.293088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-07-15 11:52:02.293437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-07-15 11:52:02.293504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-07-15 11:52:02.293802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-07-15 11:52:02.293853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-07-15 11:52:02.294060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-07-15 11:52:02.294110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 
00:25:54.408 [2024-07-15 11:52:02.294414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.294490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.294758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.294809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.295107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.295135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.295321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.295348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.295565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.295632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.295916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.295968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.296251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.296320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.296534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.296602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.296843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.296894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.297100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.297168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 
00:25:54.694 [2024-07-15 11:52:02.297490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.297557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.297818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.297887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.298137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.298206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.298492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.298560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.298915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.298993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.694 [2024-07-15 11:52:02.299209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.694 [2024-07-15 11:52:02.299243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.694 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.299470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.299504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.299757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.299792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.299968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.300002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.300226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.300260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 
00:25:54.695 [2024-07-15 11:52:02.300408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.300443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.300590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.300624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.300896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.300931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.301143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.301177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.301421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.301455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.301661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.301694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.301975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.302011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.302200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.302234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.302386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.302426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.302694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.302727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 
00:25:54.695 [2024-07-15 11:52:02.302999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.303033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.303281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.303315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.303523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.303556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.303816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.303851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.304106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.304145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.304311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.304345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.304551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.304586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.304810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.304845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.305097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.305130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.305340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.305374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 
00:25:54.695 [2024-07-15 11:52:02.305562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.305595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.305745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.305780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.305935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.305979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.306191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.306225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.306411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.306445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.306656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.306690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.306960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.307006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.307255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.307289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.307501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.307535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.307726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.307775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 
00:25:54.695 [2024-07-15 11:52:02.307934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.307968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.308177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.308211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.308357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.308391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.308643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.308676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.308819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.308854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.309102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.695 [2024-07-15 11:52:02.309136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.695 qpair failed and we were unable to recover it. 00:25:54.695 [2024-07-15 11:52:02.309363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.309397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.309556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.309590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.309781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.309816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.310026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.310059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 
00:25:54.696 [2024-07-15 11:52:02.310317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.310359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.310530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.310564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.310816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.310852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.311130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.311163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.311369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.311402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.311652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.311685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.311897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.311932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.312131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.312172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.312384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.312417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.312626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.312659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 
00:25:54.696 [2024-07-15 11:52:02.312848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.312883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.313097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.313131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.313383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.313416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.313598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.313631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.313762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.313802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.314028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.314062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.314274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.314308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.314507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.314540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.314770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.314805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.315060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.315094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 
00:25:54.696 [2024-07-15 11:52:02.315299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.315333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.315584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.315617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.315764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.315799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.316046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.316080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.316332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.316365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.316561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.316594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.316840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.316874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.317067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.317100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.317361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.317395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.317619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.317653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 
00:25:54.696 [2024-07-15 11:52:02.317906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.317941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.318156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.318190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.318435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.318469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.318724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.318766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.318960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.318994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.319207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.319241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.696 [2024-07-15 11:52:02.319458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.696 [2024-07-15 11:52:02.319492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.696 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.319656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.319689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.319952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.319986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.320164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.320198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 
00:25:54.697 [2024-07-15 11:52:02.320448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.320481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.320746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.320781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.320991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.321025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.321240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.321273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.321514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.321547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.321802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.321837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.322093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.322127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.322302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.322336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.322547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.322581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.322791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.322826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 
00:25:54.697 [2024-07-15 11:52:02.323010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.323043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.323212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.323246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.323493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.323526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.323783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.323817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.324071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.324110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.324352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.324385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.324570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.324604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.324811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.324845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.325099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.325132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.325369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.325403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 
00:25:54.697 [2024-07-15 11:52:02.325570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.325604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.325823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.325857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.326074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.326108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.326351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.326385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.326574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.326608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.326824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.326858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.327109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.327143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.327397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.327431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.327691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.327724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.327942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.327976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 
00:25:54.697 [2024-07-15 11:52:02.328196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.328230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.328392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.328426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.328679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.328712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.328982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.329017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.329183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.329217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.329417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.329451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.329646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.329680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.697 qpair failed and we were unable to recover it. 00:25:54.697 [2024-07-15 11:52:02.329940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.697 [2024-07-15 11:52:02.329974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.330226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.330260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.330508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.330541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 
00:25:54.698 [2024-07-15 11:52:02.330734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.330787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.331003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.331037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.331235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.331269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.331484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.331517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.331840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.331901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.332121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.332188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.332493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.332561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.332853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.332904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.333198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.333265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.333505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.333572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 
00:25:54.698 [2024-07-15 11:52:02.333811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.333882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.334151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.334217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.334512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.334578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.334848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.334916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.335222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.335297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.335615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.335682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.335944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.336013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.336263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.336331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.336623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.336691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.337004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.337072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 
00:25:54.698 [2024-07-15 11:52:02.337362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.337431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.337622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.337672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.337869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.337937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.338231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.338298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.338589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.338657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.338911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.338979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.339229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.339296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.339569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.339637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.339937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.340008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.698 qpair failed and we were unable to recover it. 00:25:54.698 [2024-07-15 11:52:02.340308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.698 [2024-07-15 11:52:02.340374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 
00:25:54.699 [2024-07-15 11:52:02.340610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.340659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.340918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.340987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.341272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.341339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.341638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.341704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.342018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.342091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.342343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.342411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.342674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.342723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.343032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.343103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.343360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.343427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.343721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.343781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 
00:25:54.699 [2024-07-15 11:52:02.344087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.344155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.344448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.344515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.344765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.344815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.345098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.345147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.345406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.345473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.345710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.345770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.346062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.346112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.346364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.346431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.346674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.346724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.347044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.347093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 
00:25:54.699 [2024-07-15 11:52:02.347305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.347373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.347599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.347649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.347929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.348000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.348292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.348360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.348562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.348636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.348922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.348991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.349197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.349266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.349555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.349622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.349900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.349968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.350240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.350307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 
00:25:54.699 [2024-07-15 11:52:02.350602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.350667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.350974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.351043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.351329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.351396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.351639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.351688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.352006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.352077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.352372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.352437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.352749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.352800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.353005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.353072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.353375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.353442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 00:25:54.699 [2024-07-15 11:52:02.353705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.699 [2024-07-15 11:52:02.353765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.699 qpair failed and we were unable to recover it. 
00:25:54.699 [2024-07-15 11:52:02.354065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.354114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.354411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.354478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.354723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.354801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.355094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.355142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.355392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.355460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.355759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.355809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.356057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.356107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.356316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.356385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.356654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.356720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.357027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.357076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 
00:25:54.700 [2024-07-15 11:52:02.357291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.357359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.357670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.357752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.358041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.358091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.358342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.358410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.358702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.358765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.359026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.359075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.359368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.359435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.359635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.359684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.359990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.360040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.360328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.360395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 
00:25:54.700 [2024-07-15 11:52:02.360676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.360724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.361032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.361082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.361325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.361393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.361648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.361697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.361998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.362058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.362349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.362415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.362694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.362766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.363007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.363057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.363343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.363410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.363625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.363674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 
00:25:54.700 [2024-07-15 11:52:02.363982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.364033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.364260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.364327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.364622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.364690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.365000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.365050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.365332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.365399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.365689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.365750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.366051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.366100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.366398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.366467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.366766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.366817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 00:25:54.700 [2024-07-15 11:52:02.367058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.367107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.700 qpair failed and we were unable to recover it. 
00:25:54.700 [2024-07-15 11:52:02.367392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.700 [2024-07-15 11:52:02.367459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.367697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.367757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.368043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.368093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.368351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.368417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.368713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.368774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.369075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.369124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.369385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.369451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.369671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.369720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.370017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.370085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.370370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.370438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 
00:25:54.701 [2024-07-15 11:52:02.370711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.370785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.371105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.371156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.371412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.371478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.371719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.371782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.372069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.372135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.372380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.372446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.372745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.372796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.373085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.373151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.373436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.373502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.373804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.373877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 
00:25:54.701 [2024-07-15 11:52:02.374165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.374214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.374401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.374469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.374766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.374817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.375077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.375125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.375422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.375497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.375800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.375850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.376115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.376165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.376455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.376524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.376821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.376872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.377162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.377230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 
00:25:54.701 [2024-07-15 11:52:02.377518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.377584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.377881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.377932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.378148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.378215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.378500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.378566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.378850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.378900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.379085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.379158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.379387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.379456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.379656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.379706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.379915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.379966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 00:25:54.701 [2024-07-15 11:52:02.380136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.701 [2024-07-15 11:52:02.380204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.701 qpair failed and we were unable to recover it. 
00:25:54.701 [2024-07-15 11:52:02.380443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.380493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.380711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.380774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.380960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.381029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.381276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.381330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.381628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.381696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.381962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.382031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.382340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.382406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.382696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.382756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.382962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.383029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.383331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.383398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 
00:25:54.702 [2024-07-15 11:52:02.383679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.383729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.383973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.384024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.384385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.384456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.384701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.384763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.384986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.385035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.385309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.385376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.385615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.385682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.385881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.385932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.386176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.386243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.386499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.386566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 
00:25:54.702 [2024-07-15 11:52:02.386787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.386838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.387122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.387190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.387450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.387516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.387763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.387814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.388028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.388086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.388405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.388472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.388746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.388797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.388989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.389038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.389257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.389323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.389618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.389685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 
00:25:54.702 [2024-07-15 11:52:02.389918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.389969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.390211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.390278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.390519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.390587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.390833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.390884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.391079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.391146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.391438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.391505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.391732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.391792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.391979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.392046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.392307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.392373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.392586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.392636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 
00:25:54.702 [2024-07-15 11:52:02.392880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.702 [2024-07-15 11:52:02.392948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.702 qpair failed and we were unable to recover it. 00:25:54.702 [2024-07-15 11:52:02.393222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.393272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.393546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.393613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.393836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.393904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.394110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.394177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.394478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.394545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.394850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.394926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.395137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.395204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.395440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.395507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.395782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.395833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 
00:25:54.703 [2024-07-15 11:52:02.396055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.396104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.396330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.396399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.396591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.396640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.396842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.396910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.397191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.397242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.397457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.397524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.397708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.397770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.397997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.398064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.398271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.398338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.398519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.398568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 
00:25:54.703 [2024-07-15 11:52:02.398762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.399031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.399080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.399342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.399409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.399683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.399732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.399949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.400016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.400253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.400319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.400568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.400617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.400855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.400923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.401217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.401283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.401566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.401633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 
00:25:54.703 [2024-07-15 11:52:02.401836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.401904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.402101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.402168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.402438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.402505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.402804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.402855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.403096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.403163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.703 [2024-07-15 11:52:02.403434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.703 [2024-07-15 11:52:02.403502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.703 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.403769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.403819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.404079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.404146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.404416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.404482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.404759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.404809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 
00:25:54.704 [2024-07-15 11:52:02.405053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.405102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.405352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.405420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.405684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.405734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.406008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.406057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.406257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.406324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.406555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.406622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.406813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.406863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.407096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.407163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.407423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.407491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.407720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.407782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 
00:25:54.704 [2024-07-15 11:52:02.407974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.408040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.408233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.408308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.408552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.408619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.408829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.408897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.409172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.409240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.409518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.409583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.409832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.409901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.410127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.410195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.410458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.410526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.410797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.410848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 
00:25:54.704 [2024-07-15 11:52:02.411094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.411163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.411438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.411506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.411764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.411814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.412056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.412124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.412391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.412457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.412723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.412783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.413047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.413097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.413288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.413356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.413587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.413654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.413948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.413998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 
00:25:54.704 [2024-07-15 11:52:02.414277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.414344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.414583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.414649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.414874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.414942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.415187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.415255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.415494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.415561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.415839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.415906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.704 [2024-07-15 11:52:02.416148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.704 [2024-07-15 11:52:02.416214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.704 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.416448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.416516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.416749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.416799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.417050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.417116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 
00:25:54.705 [2024-07-15 11:52:02.417382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.417450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.417673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.417722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.418015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.418082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.418347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.418415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.418639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.418688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.418925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.418992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.419215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.419281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.419544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.419612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.419846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.419917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.420146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.420213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 
00:25:54.705 [2024-07-15 11:52:02.420463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.420531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.420798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.420856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.421119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.421186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.421464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.421531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.421790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.421841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.422114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.422182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.422453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.422520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.422792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.422842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.423129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.423194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.423458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.423525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 
00:25:54.705 [2024-07-15 11:52:02.423790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.423840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.424082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.424148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.424419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.424486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.424759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.424810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.425077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.425126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.425343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.425410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.425684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.425734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.426022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.426070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.426311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.426377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.426662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.426729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 
00:25:54.705 [2024-07-15 11:52:02.427005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.427055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.427343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.427410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.427699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.427764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.428073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.428122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.428398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.428464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.428716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.428779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.429077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.705 [2024-07-15 11:52:02.429126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.705 qpair failed and we were unable to recover it. 00:25:54.705 [2024-07-15 11:52:02.429440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.429506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.429818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.429869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.430127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.430194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 
00:25:54.706 [2024-07-15 11:52:02.430492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.430559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.430805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.430856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.431151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.431219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.431500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.431567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.431855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.431904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.432164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.432230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.432513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.432581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.432868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.432937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.433224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.433291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.433540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.433608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 
00:25:54.706 [2024-07-15 11:52:02.433900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.433968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.434265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.434339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.434633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.434682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.435007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.435076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.435338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.435406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.435644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.435692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.436003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.436072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.436360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.436427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.436695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.436754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.437051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.437117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 
00:25:54.706 [2024-07-15 11:52:02.437418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.437489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.437777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.437829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.438091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.438158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.438356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.438424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.438627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.438676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.438962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.439013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.439324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.439392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.439615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.439664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.439927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.439978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.440274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.440353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 
00:25:54.706 [2024-07-15 11:52:02.440631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.440699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.441021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.441090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.441390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.441457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.441754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.441804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.442096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.442145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.442316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.442381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.442658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.442707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.706 [2024-07-15 11:52:02.443031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.706 [2024-07-15 11:52:02.443081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.706 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.443318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.443386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.443666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.443715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 
00:25:54.707 [2024-07-15 11:52:02.444033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.444082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.444302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.444369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.444662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.444728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.444997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.445046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.445306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.445373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.445625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.445675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.445983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.446034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.446336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.446403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.446663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.446712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.446962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.447012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 
00:25:54.707 [2024-07-15 11:52:02.447312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.447379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.447666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.447723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.448036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.448086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.448357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.448424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.448670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.448718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.449014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.449064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.449326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.449394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.449682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.449731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.449991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.450040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.450291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.450356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 
00:25:54.707 [2024-07-15 11:52:02.450651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.450721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.451015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.451065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.451372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.451439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.451685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.451734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.451965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.452015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.452256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.452324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.452620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.452688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.452956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.453006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.453231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.453296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.453548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.453615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 
00:25:54.707 [2024-07-15 11:52:02.453943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.454012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.454309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.454376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.707 [2024-07-15 11:52:02.454598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.707 [2024-07-15 11:52:02.454647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.707 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.454917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.454986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.455262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.455329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.455572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.455640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.455897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.455968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.456267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.456335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.456598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.456647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.456893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.456960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 
00:25:54.708 [2024-07-15 11:52:02.457205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.457272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.457562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.457630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.457933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.458001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.458261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.458329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.458591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.458657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.458908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.458976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.459271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.459339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.459602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.459652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.459911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.459979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.460264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.460331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 
00:25:54.708 [2024-07-15 11:52:02.460577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.460643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.460945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.461022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.461320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.461394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.461691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.461754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.462013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.462083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.462342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.462409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.462659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.462708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.462972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.463040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.463296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.463362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.463649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.463716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 
00:25:54.708 [2024-07-15 11:52:02.463992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.464059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.464360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.464428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.464720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.464785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.465083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.465132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.465401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.465469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.465713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.465778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.465989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.466061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.466321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.466389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.466641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.466713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.466990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.467040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 
00:25:54.708 [2024-07-15 11:52:02.467341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.467408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.467665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.467715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.468025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.708 [2024-07-15 11:52:02.468076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.708 qpair failed and we were unable to recover it. 00:25:54.708 [2024-07-15 11:52:02.468378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.468444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.468753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.468804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.469050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.469100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.469329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.469396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.469684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.469768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.470038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.470088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.470328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.470394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 
00:25:54.709 [2024-07-15 11:52:02.470654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.470723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.471051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.471101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.471398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.471465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.471724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.471789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.472086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.472135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.472385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.472452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.472713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.472778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.473051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.473119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.473410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.473477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.473721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.473786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 
00:25:54.709 [2024-07-15 11:52:02.474103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.474178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.474470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.474546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.474823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.474874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.475092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.475157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.475416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.475484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.475775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.475825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.476072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.476140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.476430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.476498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.476807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.476857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.477160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.477227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 
00:25:54.709 [2024-07-15 11:52:02.477555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.477604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.477822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.477872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.478137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.478203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.478540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.478607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.478911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.478961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.479261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.479328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.479545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.479611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.479892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.479960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.480284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.480352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.480664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.480712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 
00:25:54.709 [2024-07-15 11:52:02.481015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.481082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.481386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.481453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.481758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.481808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.482096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-07-15 11:52:02.482145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.709 qpair failed and we were unable to recover it. 00:25:54.709 [2024-07-15 11:52:02.482423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.482491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.482779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.482831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.483119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.483169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.483434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.483502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.483765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.483816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.484060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.484110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 
00:25:54.710 [2024-07-15 11:52:02.484338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.484406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.484658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.484726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.485009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.485060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.485354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.485422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.485726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.485792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.486088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.486139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.486441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.486509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.486806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.486858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.487145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.487195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.487445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.487515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 
00:25:54.710 [2024-07-15 11:52:02.487801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.487853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.488114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.488192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.488450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.488521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.488775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.488826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.489119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.489170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.489477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.489544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.489840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.489891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.490157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.490225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.490492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.490560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.490793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.490845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 
00:25:54.710 [2024-07-15 11:52:02.491144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.491214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.491505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.491573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.491832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.491903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.492201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.492270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.492529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.492597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.492876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.492945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.493248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.493315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.493611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.493679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.493989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.494057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.494358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.494434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 
00:25:54.710 [2024-07-15 11:52:02.494662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.494711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.494977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.495046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.495334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.495401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.495690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.495754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.496026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-07-15 11:52:02.496100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.710 qpair failed and we were unable to recover it. 00:25:54.710 [2024-07-15 11:52:02.496392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.496458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.496765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.496816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.497070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.497137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.497381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.497452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.497685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.497733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 
00:25:54.711 [2024-07-15 11:52:02.498024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.498074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.498372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.498441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.498727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.498799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.499088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.499137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.499403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.499469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.499673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.499722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.500036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.500109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.500430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.500483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.500766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.500817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.501023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.501074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 
00:25:54.711 [2024-07-15 11:52:02.501367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.501434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.501761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.501837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.502099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.502150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.502475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.502542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.502825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.502875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.503129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.503170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.503401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.503443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.503764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.503832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.504134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.504185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.504457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.504523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 
00:25:54.711 [2024-07-15 11:52:02.504822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.504873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.505154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.505204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.505463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.505530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.505788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.505838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.506130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.506179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.506484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.506550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.506834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.506884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.507181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.507249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.507510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.711 [2024-07-15 11:52:02.507577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.711 qpair failed and we were unable to recover it. 00:25:54.711 [2024-07-15 11:52:02.507839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.507890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 
00:25:54.712 [2024-07-15 11:52:02.508146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.508213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.508455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.508524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.508692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.508764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.509032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.509099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.509371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.509438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.509689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.509757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.510035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.510110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.510421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.510490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.510803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.510855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.511165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.511231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 
00:25:54.712 [2024-07-15 11:52:02.511496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.511564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.511766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.511817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.512074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.512141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.512399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.512466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.512754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.512804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.513066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.513132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.513393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.513461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.513646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.513695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.513952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.514003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.514296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.514370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 
00:25:54.712 [2024-07-15 11:52:02.514648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.514721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.515067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.515145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.515409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.515478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.515794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.515845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.516120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.516187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.516478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.516544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.516832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.516884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.517143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.517211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.517473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.517540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.517823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.517872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 
00:25:54.712 [2024-07-15 11:52:02.518190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.518267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.518573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.518640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.518905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.518956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.519281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.519348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.519610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.519677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.519935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.519984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.520276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.520343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.520646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.520712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.521032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.521106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 00:25:54.712 [2024-07-15 11:52:02.521402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.712 [2024-07-15 11:52:02.521469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.712 qpair failed and we were unable to recover it. 
00:25:54.713 [2024-07-15 11:52:02.521778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.521828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.522129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.522197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.522459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.522526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.522822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.522872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.523155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.523222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.523535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.523602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.523826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.523878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.524147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.524213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.524511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.524578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.524863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.524914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 
00:25:54.713 [2024-07-15 11:52:02.525175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.525242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.525528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.525595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.525826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.525895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.526200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.526269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.526556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.526623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.526936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.527004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.527193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.527261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.527515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.527583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.527834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.527901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.528178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.528246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 
00:25:54.713 [2024-07-15 11:52:02.528495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.528563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.528861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.528938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.529199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.529267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.529554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.529621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.529857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.529926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.530210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.530276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.530555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.530622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.530930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.531001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.531265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.531332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.531534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.531582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 
00:25:54.713 [2024-07-15 11:52:02.531862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.531930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.532190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.532258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.532547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.532615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.532906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.532975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.533243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.533311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.533595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.533643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.533900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.533969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.534216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.534284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.534555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.534623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 00:25:54.713 [2024-07-15 11:52:02.534952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.535021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.713 qpair failed and we were unable to recover it. 
00:25:54.713 [2024-07-15 11:52:02.535293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.713 [2024-07-15 11:52:02.535362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.535594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.535644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.535861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.535930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.536226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.536293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.536610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.536678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.536992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.537061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.537312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.537380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.537676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.537726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.538019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.538092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.538389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.538457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 
00:25:54.714 [2024-07-15 11:52:02.538774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.538825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.539101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.539170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.539423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.539489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.539780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.539831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.540124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.540173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.540474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.540541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.540836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.540887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.541139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.541207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.541482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.541551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.541774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.541843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 
00:25:54.714 [2024-07-15 11:52:02.542141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.542207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.542411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.542484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.542727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.542807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.543119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.543187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.543450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.543518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.543803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.543855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.544150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.544218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.544514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.544582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.544861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.544912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.545161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.545227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 
00:25:54.714 [2024-07-15 11:52:02.545528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.545597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.545907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.545976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.546283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.546351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.546620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.546688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.547019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.547087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.547389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.547457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.547758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.547810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.548076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.548126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.548357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.548423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.714 [2024-07-15 11:52:02.548674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.548724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 
00:25:54.714 [2024-07-15 11:52:02.549027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.714 [2024-07-15 11:52:02.549076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.714 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.549329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.549397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.549612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.549661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.549961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.550012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.550278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.550344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.550615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.550682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.550988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.551058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.551331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.551398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.551656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.551706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.551968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.552036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 
00:25:54.715 [2024-07-15 11:52:02.552262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.552329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.552594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.552661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.552900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.552970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.553269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.553336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.553609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.553675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.553992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.554061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.554372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.554439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.554698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.554770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.555066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.555116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.555418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.555486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 
00:25:54.715 [2024-07-15 11:52:02.555726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.555791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.556080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.556137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.556433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.556501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.556760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.556810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.557102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.557151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.557432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.557500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.557718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.557783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.558100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.558149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.558446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.558514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.558804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.558855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 
00:25:54.715 [2024-07-15 11:52:02.559141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.559190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.559465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.559532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.559830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.559881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.560133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.560201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.560504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.560570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.560829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.560880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.561183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.561250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.561550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.561617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.561903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.561953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.562216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.562284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 
00:25:54.715 [2024-07-15 11:52:02.562558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.715 [2024-07-15 11:52:02.562626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.715 qpair failed and we were unable to recover it. 00:25:54.715 [2024-07-15 11:52:02.562868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.562937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.563225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.563292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.563588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.563654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.563929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.563998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.564229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.564296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.564592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.564659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.564932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.565001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.565318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.565385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.565638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.565687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 
00:25:54.716 [2024-07-15 11:52:02.565928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.565996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.566259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.566331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.566575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.566642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.566942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.567010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.567265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.567332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.567625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.567692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.567913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.567983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.568272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.568339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.568626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.568693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.568997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.569065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 
00:25:54.716 [2024-07-15 11:52:02.569334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.569401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.569685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.569734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.570053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.570121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.570420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.570486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.570781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.570832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.571095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.571164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.571420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.571488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.571767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.571818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.572101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.572149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.572436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.572503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 
00:25:54.716 [2024-07-15 11:52:02.572752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.572802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.573055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.573105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.573406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.573473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.573703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.573764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.574056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.574105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.574370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-07-15 11:52:02.574438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.716 qpair failed and we were unable to recover it. 00:25:54.716 [2024-07-15 11:52:02.574709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.574774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.575058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.575108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.575415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.575482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.575785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.575837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 
00:25:54.717 [2024-07-15 11:52:02.576126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.576175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.576456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.576524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.576774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.576823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.577071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.577120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.577369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.577436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.577689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.577749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.577968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.578017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.578261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.578329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.578619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.578695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.578968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.579019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 
00:25:54.717 [2024-07-15 11:52:02.579315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.579384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.579625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.579674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.579972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.580023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.580324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.580391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.580646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.580695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.580953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.581004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.581279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.581346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.581563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.581631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.581874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.581941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.582236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.582302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 
00:25:54.717 [2024-07-15 11:52:02.582600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.582667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.582932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.583001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.583274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.583341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.583610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.583677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.583947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.584016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.584316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.584383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.584612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.584661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.584929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.584998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.585307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.585375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.585588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.585637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 
00:25:54.717 [2024-07-15 11:52:02.585893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.585962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.586261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.586330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.586626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.586693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.586967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.587035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.587306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.587373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.587672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.587721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.587990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.717 [2024-07-15 11:52:02.588065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.717 qpair failed and we were unable to recover it. 00:25:54.717 [2024-07-15 11:52:02.588350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.588400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.588640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.588688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.588951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.589019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 
00:25:54.718 [2024-07-15 11:52:02.589335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.589403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.589612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.589660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.589906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.589975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.590274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.590340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.590633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.590700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.591008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.591075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.591383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.591450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.591749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.591800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.592071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.592128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.592430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.592496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 
00:25:54.718 [2024-07-15 11:52:02.592779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.592829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.593101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.593150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.593451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.593517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.593706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.593770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.594035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.594084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.594374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.594440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.594691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.594768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.595060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.595110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.595359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.595426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.595720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.595787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 
00:25:54.718 [2024-07-15 11:52:02.596067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.596118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.596380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.596446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.596761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.596812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.597047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.597097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.597398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.597465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.597713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.597777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.597983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.598032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.598296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.598364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.598669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.598750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.599021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.599070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 
00:25:54.718 [2024-07-15 11:52:02.599332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.599657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.599724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.600030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.600079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.600380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.600447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.600726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.600791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.601025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.601074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.601326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.601395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.601693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.718 [2024-07-15 11:52:02.601776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.718 qpair failed and we were unable to recover it. 00:25:54.718 [2024-07-15 11:52:02.602071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.602120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.602398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.602466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 
00:25:54.719 [2024-07-15 11:52:02.602769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.602819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.603113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.603162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.603422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.603489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.603753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.603803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.604056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.604106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.604371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.604440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.604718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.604782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.605039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.605088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.605319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.605393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.605689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.605770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 
00:25:54.719 [2024-07-15 11:52:02.606025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.606075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.606344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.606410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.606707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.606790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.607087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.607137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.607359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.607426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.607718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.607784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.608033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.608083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.608379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.608446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.608666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.608715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.609023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.609074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 
00:25:54.719 [2024-07-15 11:52:02.609326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.609392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.609685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.609769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.610038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.610087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.610400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.610468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.610717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.610796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.611097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.611147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.611445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.611511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.611803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.611855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.612141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.612191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.612416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.612487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 
00:25:54.719 [2024-07-15 11:52:02.612769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.612820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.613111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.613178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.613427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.613495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.613755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.613806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.614055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.614104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.614372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.614439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.614701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.614762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.615045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.615094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.719 [2024-07-15 11:52:02.615390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.719 [2024-07-15 11:52:02.615456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.719 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.615748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.615799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 
00:25:54.720 [2024-07-15 11:52:02.616092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.616141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.616443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.616511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.616811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.616862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.617115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.617164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.617411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.617478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.617777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.617827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.618123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.618173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.618420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.618487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.618790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.618848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.619105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.619173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 
00:25:54.720 [2024-07-15 11:52:02.619424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.619491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.619783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.619834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.620083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.620152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.620423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.620489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.620771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.620820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.621107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.621176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.621463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.621530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.621821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.621871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.622118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.622187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.622490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.622556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 
00:25:54.720 [2024-07-15 11:52:02.622780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.622830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.623135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.623202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.623520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.623587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.623875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.623925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.624215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.624282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.624533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.624600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.624884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.624935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.625227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.625293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.625592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.625659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.625932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.626000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 
00:25:54.720 [2024-07-15 11:52:02.626303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.626371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.626659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.626708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.627014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.627090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.720 [2024-07-15 11:52:02.627344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.720 [2024-07-15 11:52:02.627412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.720 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.627642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.627691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.627975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.628045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.628300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.628366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.628671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.628720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.629005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.629075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.629373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.629439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 
00:25:54.721 [2024-07-15 11:52:02.629735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.629798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.630056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.630124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.630384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.630451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.630689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.630751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.631048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.631116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.631416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.631483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.631793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.631868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.632160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.632228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.632527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.632606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.632846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.632897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 
00:25:54.721 [2024-07-15 11:52:02.633193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.633261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.633515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.633582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.633835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.633887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.634197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.634263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.634513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.634579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.634872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.634942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.635186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.635254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.635535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.635602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.635902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.635970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.636269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.636337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 
00:25:54.721 [2024-07-15 11:52:02.636590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.636640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.636935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.637002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.637305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.637372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.637648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.637698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.638013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.638079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.638385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.638451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.638756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.638808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.639053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.639102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.639393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.639460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.639756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.639807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 
00:25:54.721 [2024-07-15 11:52:02.640049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.640098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.640365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.640431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.640683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.640732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.721 qpair failed and we were unable to recover it. 00:25:54.721 [2024-07-15 11:52:02.641043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.721 [2024-07-15 11:52:02.641093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.641338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.641405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.641661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.641710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.641979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.642028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.642314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.642382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.642678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.642771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.643075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.643124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 
00:25:54.722 [2024-07-15 11:52:02.643389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.643455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.643705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.643769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.644059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.644108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.644356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.644423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.644709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.644772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.645060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.645109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.645410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.645477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.645765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.645816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.646112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.646170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.646470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.646538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 
00:25:54.722 [2024-07-15 11:52:02.646827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.646877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.647168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.647217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.647525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.647590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.647886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.647936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.648184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.648252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.648503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.648571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.648864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.648913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.649210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.649279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.649579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.649649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.649931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.649981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 
00:25:54.722 [2024-07-15 11:52:02.650270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.650338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.650585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.650653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.650987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.651056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.651354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.651422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.651703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.651761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.652020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.652093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.652387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.652455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.652756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.652807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.653065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.653132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.653401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.653468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 
00:25:54.722 [2024-07-15 11:52:02.653709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.653772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.654078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.654153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.654451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.654519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:54.722 [2024-07-15 11:52:02.654781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.722 [2024-07-15 11:52:02.654832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:54.722 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.655129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.655179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.655448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.655517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.655763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.655814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.656106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.656156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.656455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.656523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.656843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.656892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 
00:25:55.039 [2024-07-15 11:52:02.657187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.657237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.657525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.657594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.657893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.657943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.658251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.658334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.658649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.658723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.659035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.659085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.659390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.659457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.659754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.659803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.660112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.660167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 00:25:55.039 [2024-07-15 11:52:02.660468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.039 [2024-07-15 11:52:02.660534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.039 qpair failed and we were unable to recover it. 
00:25:55.039 [2024-07-15 11:52:02.660838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.039 [2024-07-15 11:52:02.660886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:55.039 qpair failed and we were unable to recover it.
00:25:55.045 [NOTE: the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt between 11:52:02.660 and 11:52:02.729; only the timestamps differ.]
00:25:55.045 [2024-07-15 11:52:02.729566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.729635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.729933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.729983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.730277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.730345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.730643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.730713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.730969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.731018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.731297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.731364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.731649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.731716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.731968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.732035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.732338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.732413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.732687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.732736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 
00:25:55.045 [2024-07-15 11:52:02.732999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.733048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.045 [2024-07-15 11:52:02.733338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-07-15 11:52:02.733404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.733606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.733654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.733879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.733930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.734169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.734236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.734487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.734554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.734834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.734905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.735208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.735276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.735587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.735655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.735931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.736000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 
00:25:55.046 [2024-07-15 11:52:02.736290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.736358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.736586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.736635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.736849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.736919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.737161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.737229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.737471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.737538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.737778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.737828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.738071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.738138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.738421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.738487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.738757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.738807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.739098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.739147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 
00:25:55.046 [2024-07-15 11:52:02.739389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.739456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.739678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.739727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.740025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.740074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.740345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.740421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.740610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.740658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.740936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.740986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.741267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.741335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.741616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.741683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.741969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.742037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.742308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.742375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 
00:25:55.046 [2024-07-15 11:52:02.742603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.742652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.742941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.743009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.743265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.743331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.743603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.743670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.743941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.744010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.744223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.744291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.046 qpair failed and we were unable to recover it. 00:25:55.046 [2024-07-15 11:52:02.744576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.046 [2024-07-15 11:52:02.744643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.744947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.745016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.745273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.745339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.745612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.745682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 
00:25:55.047 [2024-07-15 11:52:02.745971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.746038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.746314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.746381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.746615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.746663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.746983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.747052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.747306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.747373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.747585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.747642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.747952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.748020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.748270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.748336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.748634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.748702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.748995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.749064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 
00:25:55.047 [2024-07-15 11:52:02.749305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.749371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.749642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.749690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.749957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.750025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.750299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.750365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.750609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.750657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.750954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.751023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.751346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.751414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.751671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.751720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.751943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.752011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.752312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.752379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 
00:25:55.047 [2024-07-15 11:52:02.752665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.752714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.753042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.753091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.753346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.753412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.753704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.753767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.754021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.754069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.754351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.754417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.754707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.754779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.755082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.755130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.755387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.755456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.755762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.755813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 
00:25:55.047 [2024-07-15 11:52:02.756106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.047 [2024-07-15 11:52:02.756155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.047 qpair failed and we were unable to recover it. 00:25:55.047 [2024-07-15 11:52:02.756449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.756516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.756823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.756874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.757171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.757220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.757515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.757582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.757787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.757837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.758101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.758167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.758465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.758532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.758781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.758831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.759128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.759195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 
00:25:55.048 [2024-07-15 11:52:02.759473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.759541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.759825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.759874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.760074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.760141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.760438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.760506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.760790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.760840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.761067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.761142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.761389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.761455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.761762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.761812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.762096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.762145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.762386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.762453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 
00:25:55.048 [2024-07-15 11:52:02.762780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.762830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.763079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.763129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.763447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.763514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.763767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.763818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.764085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.764134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.764427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.764494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.764760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.764810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.765029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.765078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.765293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.765359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.765658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.765726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 
00:25:55.048 [2024-07-15 11:52:02.766016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.766067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-07-15 11:52:02.766323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-07-15 11:52:02.766390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.766648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.766714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.766961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.767011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.767315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.767381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.767689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.767771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.768068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.768117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.768416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.768482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.768730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.768793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.769076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.769124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-07-15 11:52:02.769430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.769496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.769756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.769807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.770114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.770164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.770469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.770536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.770824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.770876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.771165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.771214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.771471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.771539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.771838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.771888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.772149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.772215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.772527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.772595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-07-15 11:52:02.772849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.772900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.773208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.773275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.773509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.773575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.773871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.773922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.774218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.774285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.774580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.774654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.774879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.774948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.775247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.775313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.775608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.775677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.775993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.776061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-07-15 11:52:02.776363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.776429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.776674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.776723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.777033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.777084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-07-15 11:52:02.777388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-07-15 11:52:02.777455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.777749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.777800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.778118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.778192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.778485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.778553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.778858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.778909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.779199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.779265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.779514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.779581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-07-15 11:52:02.779828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.779878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.780089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.780155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.780447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.780513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.780802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.780852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.781146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.781212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.781471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.781539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.781832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.781882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.782179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.782246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.782539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.782605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.782872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.782923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-07-15 11:52:02.783221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.783287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.783585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.783652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.783919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.783970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.784278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.784346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.784605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.784672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.784972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.785041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.785347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.785414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.785698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.785757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.786051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.786100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.786403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.786469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-07-15 11:52:02.786779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.786829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.787113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.787162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.787478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.787545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.787814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.787864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.788120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.788188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.788482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.788539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.788857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-07-15 11:52:02.788907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-07-15 11:52:02.789210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.789277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.789474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.789541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.789842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.789892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.051 [2024-07-15 11:52:02.790149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.790215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.790513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.790580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.790822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.790890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.791187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.791255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.791544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.791610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.791919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.791989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.792232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.792300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.792550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.792617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.792924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.792992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.793311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.793381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.051 [2024-07-15 11:52:02.793678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.793727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.794040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.794113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.794399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.794467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.794780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.794830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.795069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.795119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.795371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.795439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.795677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.795726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.796017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.796068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.796360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.796427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.796731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.796794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.051 [2024-07-15 11:52:02.797094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.797143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.797441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.797508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.797809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.797861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.798163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.798212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.798483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.798549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.798840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.798890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.799194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.799262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.799549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.799615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.799870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.799938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.800231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.800280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.051 [2024-07-15 11:52:02.800584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.800652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-07-15 11:52:02.800957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-07-15 11:52:02.801008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.801272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.801339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.801633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.801699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.801998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.802048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.802300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.802373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.802625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.802692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.802999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.803049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.803348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.803416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.803677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.803725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 
00:25:55.052 [2024-07-15 11:52:02.804044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.804093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.804391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.804457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.804761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.804811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.805068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.805134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.805429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.805496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.805757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.805808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.806111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.806186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.806484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.806551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.806847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.806898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.807200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.807266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 
00:25:55.052 [2024-07-15 11:52:02.807563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.807630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.807901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.807951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.808217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.808286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.808509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.808574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.808853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.808903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.809151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.809218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.809477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.809544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.809847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.809915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.810167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.810236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.810482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.810548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 
00:25:55.052 [2024-07-15 11:52:02.810783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.810855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.811105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.811175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.811495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.811562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.811828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.052 [2024-07-15 11:52:02.811898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.052 qpair failed and we were unable to recover it. 00:25:55.052 [2024-07-15 11:52:02.812168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.812236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.812476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.812542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.812838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.812907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.813211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.813278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.813523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.813572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.813823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.813893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 
00:25:55.053 [2024-07-15 11:52:02.814209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.814285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.814579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.814628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.814899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.814968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.815263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.815330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.815610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.815659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.815916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.815993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.816245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.816312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.816605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.816672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.816957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.817026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.817323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.817390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 
00:25:55.053 [2024-07-15 11:52:02.817638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.817687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.817933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.818008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.818267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.818334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.818645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.818713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.818938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.819005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.819282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.819348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.819619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.819691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.819905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.819973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.820267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.820333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.820531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.820599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 
00:25:55.053 [2024-07-15 11:52:02.820858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.820929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.821233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.821303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.821549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.821617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.821860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.821930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.822187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.822253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.822553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.822619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.822891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.053 [2024-07-15 11:52:02.822960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.053 qpair failed and we were unable to recover it. 00:25:55.053 [2024-07-15 11:52:02.823250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.823316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.823608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.823657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.823894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.823963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 
00:25:55.054 [2024-07-15 11:52:02.824264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.824332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.824565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.824633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.824868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.824937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.825235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.825301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.825557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.825624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.825883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.825952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.826254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.826321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.826565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.826614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.826889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.826958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.827263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.827329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 
00:25:55.054 [2024-07-15 11:52:02.827597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.827646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.827856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.827926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.828226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.828293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.828597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.828663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.828967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.829035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.829330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.829406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.829674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.829724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.830001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.830080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.830339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.830406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.830695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.830758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 
00:25:55.054 [2024-07-15 11:52:02.831053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.831119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.831368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.831435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.831681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.831730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.832042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.832109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.832404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.054 [2024-07-15 11:52:02.832471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.054 qpair failed and we were unable to recover it. 00:25:55.054 [2024-07-15 11:52:02.832766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.832816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.833091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.833159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.833464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.833532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.833827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.833895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.834205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.834254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-07-15 11:52:02.834515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.834582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.834874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.834926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.835215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.835281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.835583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.835650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.835950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.836001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.836304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.836370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.836669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.836748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.837011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.837060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.837338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.837403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.837715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.837776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-07-15 11:52:02.838069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.838119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.838416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.838483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.838790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.838841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.839089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.839155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.839402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.839468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.839712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.839774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.840029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.840078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.840370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.840437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.840728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.840792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.841070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.841119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-07-15 11:52:02.841406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.841473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.841764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.841815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.842063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.842112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.842416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.842482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.842776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.842827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.843116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.843174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.843431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.843498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.843801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.843852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.844132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.844182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.844477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.844546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-07-15 11:52:02.844828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-07-15 11:52:02.844879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-07-15 11:52:02.845126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.845193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.845441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.845507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.845795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.845846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.846142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.846210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.846511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.846579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.846868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.846918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.847210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.847277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.847570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.847636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.847941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.847992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-07-15 11:52:02.848257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.848325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.848635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.848702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.849009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.849092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.849399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.849465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.849764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.849815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.850064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.850114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.850413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.850481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.850634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.850683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.850920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.850971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.851228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.851295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-07-15 11:52:02.851509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.851576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.851824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.851893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.852150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.852200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.852487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.852536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.852833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.852883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.853124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.853192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.853488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.853555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.853812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.853862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.854060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.854135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.854365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.854432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-07-15 11:52:02.854651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.854700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.854950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.855019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.855252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.855319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.855539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.855588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.855811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.855882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.856094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.856169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.856352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.856418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.856640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.856688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.856904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.856972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-07-15 11:52:02.857166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-07-15 11:52:02.857215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-07-15 11:52:02.857383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.857432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.857593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.857642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.857862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.857914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.858195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.858243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.858457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.858506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.858704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.858773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.858980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.859030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.859227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.859276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.859494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.859544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.859811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.859885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 
00:25:55.057 [2024-07-15 11:52:02.860100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.860167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.860367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.860435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.860626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.860676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.860866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.860936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.861126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.861192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.861410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.861458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.861685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.861735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.861928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.861997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.862151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.862200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.862390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.862438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 
00:25:55.057 [2024-07-15 11:52:02.862615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.862664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.862836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.862885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.863083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.863133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.863332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.863381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.863555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.863611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.863804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.863855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.864074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.864124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.864390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.864439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.864671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.864720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.864976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.865043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 
00:25:55.057 [2024-07-15 11:52:02.865278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.865344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.865545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.865603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.865852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.865921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.866264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.866331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.866602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.866651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.866853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.057 [2024-07-15 11:52:02.866928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.057 qpair failed and we were unable to recover it. 00:25:55.057 [2024-07-15 11:52:02.867152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.867218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.867507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.867556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.867814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.867883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.868102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.868169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 
00:25:55.058 [2024-07-15 11:52:02.868450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.868518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.868791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.868841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.869027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.869095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.869418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.869485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.869700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.869760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.869962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.870030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.870281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.870348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.870603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.870669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.870932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.871001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.871298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.871366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 
00:25:55.058 [2024-07-15 11:52:02.871652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.871701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.871938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.872006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.872271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.872320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.872554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.872622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.872886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.872955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.873304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.873370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.873663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.873712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.873946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.874014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.874361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.874438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.874717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.874780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 
00:25:55.058 [2024-07-15 11:52:02.874974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.875025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.875193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.875264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.875515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.875584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.875815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.875886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.876164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.876229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.876498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.876564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.876809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.876882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.877125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.877192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.877525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-07-15 11:52:02.877593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-07-15 11:52:02.877832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.877900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-07-15 11:52:02.878123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.878190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.878480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.878546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.878800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.878868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.879068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.879134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.879430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.879497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.879734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.879802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.880059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.880108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.880368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.880417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.880706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.880777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.880999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.881067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-07-15 11:52:02.881322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.881389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.881633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.881682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.881931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.881998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.882237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.882305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.882537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.882603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.882872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.882940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.883198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.883264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.883521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.883587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.883845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.883912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.884182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.884248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-07-15 11:52:02.884509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.884576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.884814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.884888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.885090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.885157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.885437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.885505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.885761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.885821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.886020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.886087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.886356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.886424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.886709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.886769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.886986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.887057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.887290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.887357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-07-15 11:52:02.887645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.887710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.887942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.887991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.888281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.888349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.888543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.888607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-07-15 11:52:02.888799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-07-15 11:52:02.888871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.889092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.889158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.889380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.889446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.889677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.889726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.889970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.890038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.890368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.890435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-07-15 11:52:02.890718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.890779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.891019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.891086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.891440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.891509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.891728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.891792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.891979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.892028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.892308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.892383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.892650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.892718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.892980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.893029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.893266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.893332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.893619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.893684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-07-15 11:52:02.893854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.893904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.894137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.894204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.894473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.894540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.894776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.894827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.895006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.895072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.895420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.895489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.895707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.895767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.896033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.896081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.896392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.896460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.896719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.896781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-07-15 11:52:02.896998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.897047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.897324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.897373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.897656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.897722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.897952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.898001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.898227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.898293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.898559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.898627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.898848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.898899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.899134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.899202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.899409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.899474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.899696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.899757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-07-15 11:52:02.899964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.900032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.900303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.900369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.900660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.900709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-07-15 11:52:02.900953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-07-15 11:52:02.901021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.901248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.901315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.901549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.901615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.901884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.901952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.902146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.902212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.902444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.902510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.902775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.902827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 
00:25:55.061 [2024-07-15 11:52:02.903064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.903131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.903368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.903434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.903699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.903757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.904041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.904088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.904263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.904329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.904588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.904655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.904903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.904971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.905209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.905276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.905471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.905538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.905759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.905809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 
00:25:55.061 [2024-07-15 11:52:02.906086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.906152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.906397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.906462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.906686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.906735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.906997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.907065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.907273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.907341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.907609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.907676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.907947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.908017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.908291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.908357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.908617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.908683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.908985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.909054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 
00:25:55.061 [2024-07-15 11:52:02.909325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.909392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.909654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.909703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.909944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.910011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.910237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.910304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.910530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.910598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.910831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.910899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.911142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.911208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.911460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.911527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.911760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.061 [2024-07-15 11:52:02.911810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.061 qpair failed and we were unable to recover it. 00:25:55.061 [2024-07-15 11:52:02.912036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.912103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 
00:25:55.062 [2024-07-15 11:52:02.912370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.912436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.912653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.912702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.912998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.913055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.913320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.913387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.913603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.913652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.913910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.913979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.914217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.914283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.914556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.914625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.914904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.914972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.915331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.915381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 
00:25:55.062 [2024-07-15 11:52:02.915634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.915683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.915949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.916018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.916215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.916282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.916579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.916646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.916938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.917007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.917326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.917396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.917698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.917757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.918062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.918129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.918422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.918489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.918824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.918875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 
00:25:55.062 [2024-07-15 11:52:02.919107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.919174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.919473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.919540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.919810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.919861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.920123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.920189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.920481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.920547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.920845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.920895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.921196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.921261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.921595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.921668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.921945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.921996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.922305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.922372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 
00:25:55.062 [2024-07-15 11:52:02.922663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.922731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.923024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.923074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.923304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.923370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.923661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.923710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.923981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.924031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.924307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.924374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.924608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.924676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.062 [2024-07-15 11:52:02.924963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.062 [2024-07-15 11:52:02.925031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.062 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.925329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.925396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.925641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.925690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 
00:25:55.063 [2024-07-15 11:52:02.925981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.926030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.926292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.926358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.926617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.926674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.926988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.927058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.927359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.927426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.927656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.927705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.928036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.928105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.928393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.928460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.928759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.928810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.929071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.929138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 
00:25:55.063 [2024-07-15 11:52:02.929407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.929474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.929773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.929824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.930016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.930064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.930324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.930391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.930684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.930770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.931074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.931122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.931427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.931494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.931832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.931882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.932182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.932249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.932468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.932535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 
00:25:55.063 [2024-07-15 11:52:02.932808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.932858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.933059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.933125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.933390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.933457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.933761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.933811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.934023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.934072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.063 [2024-07-15 11:52:02.934334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.063 [2024-07-15 11:52:02.934400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.063 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.934674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.934767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.935066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.935115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.935387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.935454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.935757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.935808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 
00:25:55.064 [2024-07-15 11:52:02.936042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.936092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.936336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.936401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.936605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.936674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.936962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.937012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.937307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.937373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.937648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.937715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.938012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.938061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.938228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.938295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.938482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.938550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.938839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.938889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 
00:25:55.064 [2024-07-15 11:52:02.939183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.939232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.939459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.939527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.939767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.939824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.940130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.940203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.940407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.940474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.940675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.940724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.941079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.941160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.941479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.941545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.941803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.941854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.942106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.942174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 
00:25:55.064 [2024-07-15 11:52:02.942435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.942503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.942790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.942841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.943142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.943209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.943511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.943579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.943818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.943869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.944122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.944188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.944439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.944489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.944761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.944811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.945071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.945137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.945349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.945417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 
00:25:55.064 [2024-07-15 11:52:02.945661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.945709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.945920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.945970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-07-15 11:52:02.946201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-07-15 11:52:02.946250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.946548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.946628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.946914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.946965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.947260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.947328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.947596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.947663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.947936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.948003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.948311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.948379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.948643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.948692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-07-15 11:52:02.949002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.949070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.949406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.949476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.949734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.949797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.950026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.950096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.950407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.950475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.950774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.950825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.951012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.951061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.951265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.951332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.951622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.951690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.951931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.951981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-07-15 11:52:02.952242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.952309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.952568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.952634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.952827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.952884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.953089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.953160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.953412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.953477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.953699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.953771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.954046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.954113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.954374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.954441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.954754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.954805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.955090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.955140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-07-15 11:52:02.955401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.955467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.955677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.955726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.956041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.956107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.956312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.956378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.956667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.956734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.957026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.957076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.957400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.957467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.957731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.957795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-07-15 11:52:02.958094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-07-15 11:52:02.958143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.958433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.958502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-07-15 11:52:02.958837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.958889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.959192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.959258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.959550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.959619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.959869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.959919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.960156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.960223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.960402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.960467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.960755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.960805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.961105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.961154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.961435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.961502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.961774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.961825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-07-15 11:52:02.962069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.962120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.962388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.962454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.962681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.962730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.962987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.963036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.963267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.963335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.963507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.963575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.963751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.963801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.964030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.964098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.964337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.964404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.964689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.964749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-07-15 11:52:02.964958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.965025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.965309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.965375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.965673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.965730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.966008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.966057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.966353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.966420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.966673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.966722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.967017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.967067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.967301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.967368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.967600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.967668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.967978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.968029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-07-15 11:52:02.968311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.968377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.968654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.968703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.968945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.968995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.969292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.969368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.969620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.969669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.969955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.970006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.970271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.970339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.970580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-07-15 11:52:02.970649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-07-15 11:52:02.970902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.970970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.971201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.971268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
00:25:55.067 [2024-07-15 11:52:02.971507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.971575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.971819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.971890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.972184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.972250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.972546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.972614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.972861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.972930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.973169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.973237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.973540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.973608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.973905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.973973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.974227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.974302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.974610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.974659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
00:25:55.067 [2024-07-15 11:52:02.974979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.975048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.975361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.975413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.975717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.975780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.976024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.976092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.976369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.976438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.976692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.976753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.977064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.977133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.977435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.977504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.977816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.977886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.978180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.978247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
00:25:55.067 [2024-07-15 11:52:02.978551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.978621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.978910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.978961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.979261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.979338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.979628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.979695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.979997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.980049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.980351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.980420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.980708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.980774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.981009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.981060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.981360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.981431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.981714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.981778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
00:25:55.067 [2024-07-15 11:52:02.982010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.982061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.982351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.982682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.982732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.983046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.983096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.983324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.983391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.983690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.983773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.984039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.984091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.984331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.984397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-07-15 11:52:02.984696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-07-15 11:52:02.984760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.985056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.985105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 
00:25:55.068 [2024-07-15 11:52:02.985360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.985426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.985633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.985681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.985998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.986078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.986375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.986442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.986702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.986764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.987006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.987054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.987343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.987411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.987633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.987682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.987985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.988035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.988303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.988371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 
00:25:55.068 [2024-07-15 11:52:02.988677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.988759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.989020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.989069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.989366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.989433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.989698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.989761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.990019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.990068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.990327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.990394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.990628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.990693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.991004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.991054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.991346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.991413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.991670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.991719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 
00:25:55.068 [2024-07-15 11:52:02.992025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.992075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.992374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.992442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.992695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.992766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.993027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.993076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.993323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.993389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.993677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.993725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.993991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.994040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.994349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.994416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.994631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.994680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.994979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.995029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 
00:25:55.068 [2024-07-15 11:52:02.995325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.995392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.995639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.068 [2024-07-15 11:52:02.995689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.068 qpair failed and we were unable to recover it. 00:25:55.068 [2024-07-15 11:52:02.995995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.996045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.996289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.996354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.996652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.996719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.996983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.997033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.997305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.997371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.997655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.997704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.997964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.998014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.998226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.998294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 
00:25:55.069 [2024-07-15 11:52:02.998529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.998593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.998849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.998919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.999204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.999252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.999521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.999588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:02.999862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:02.999896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.000111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.000183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.000502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.000570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.000829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.000863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.001072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.001105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.001316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.001350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 
00:25:55.069 [2024-07-15 11:52:03.001510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.001544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.001725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.001793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.001952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.001985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.002228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.002296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.002552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.002619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.002906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.002940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.003145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.003213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.003423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.003494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.003697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.003758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 00:25:55.069 [2024-07-15 11:52:03.003949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.069 [2024-07-15 11:52:03.003983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.069 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-07-15 11:52:03.004177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.004247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.004457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.004526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.004706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.004776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.004938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.004974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.005172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.005239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.005495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.005543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.005774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.005827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.006006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.006037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.006180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.006212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.006379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.006411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-07-15 11:52:03.006622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.006654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.006820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.006854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.007008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.007040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.007222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.007254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.007425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.007458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.007704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.007747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.007939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.007972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.008196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.008229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.008484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.008517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.008709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.008752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-07-15 11:52:03.008932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.008964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.009146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.009203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.009407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.009440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.009664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.009697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.009857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.009889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.010052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.010086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.010382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.010449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.010699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.010760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.010943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.010975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
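The errno = 111 reported by posix_sock_create in the repeated failures above is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 is not accepting TCP connections, so every qpair connect attempt is refused before the NVMe/TCP handshake can start. As a minimal illustration (not SPDK code; the loopback address below is only a stand-in for an address with no listener on the NVMe/TCP port), a plain connect() reproduces the same errno:

/*
 * Minimal sketch of the failure mode seen in the log: connect() to a TCP
 * port with no listener returns -1 with errno = 111 (ECONNREFUSED) on Linux,
 * the same errno reported by posix_sock_create. Address/port are illustrative.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                /* With no listener on the port this prints: errno = 111 (Connection refused) */
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
}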
00:25:55.365 [2024-07-15 11:52:03.011243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4cae0 is same with the state(5) to be set 00:25:55.365 [2024-07-15 11:52:03.011598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.011697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.011958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.011998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.012203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.012263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.012469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.012544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.012807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.012840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.012952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.012985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.013175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-07-15 11:52:03.013237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-07-15 11:52:03.013439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.013501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.013757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.013828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
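The nvme_tcp_qpair_set_recv_state message at the start of the block above indicates the qpair's receive state was already equal to the state being requested (state 5), so the transition is rejected. A generic sketch of that kind of guard is shown below; the enum values, struct layout, and function name are illustrative assumptions, not the actual SPDK definitions:

/*
 * Generic sketch of a "no-op state transition" guard of the kind suggested by
 * the nvme_tcp_qpair_set_recv_state error above. All names here are assumed.
 */
#include <stdio.h>

enum recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 5 };

struct tcp_qpair { enum recv_state recv_state; };

static void set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
{
        if (tqpair->recv_state == state) {
                /* Mirrors the "is same with the state(N) to be set" error in the log */
                fprintf(stderr, "recv state of tqpair=%p is same with the state(%d) to be set\n",
                        (void *)tqpair, state);
                return;
        }
        tqpair->recv_state = state;
}

int main(void)
{
        struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the error path */
        return 0;
}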
00:25:55.366 [2024-07-15 11:52:03.013984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.014016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.014184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.014246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.014480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.014543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.014802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.014835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.014955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.014999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.015176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.015244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.015553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.015615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.015817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.015850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.016023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.016055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.016212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.016245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-07-15 11:52:03.016421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.016453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.016696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.016728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.016908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.016940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.017147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.017224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.017524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.017586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.017839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.017874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.018015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.018048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.018176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.018214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.018408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.018471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.018734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.018812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-07-15 11:52:03.018996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.019031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.019215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.019247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.019413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.019445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.019664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.019696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.019840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.019873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.020019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.020051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.020201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.020233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.020442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.020504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.020715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.020803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.020992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.021041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-07-15 11:52:03.021247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.021297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.021541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.021604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.021837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.021874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.022020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.022053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.022262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.022326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.022515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.022577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.022820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.022854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-07-15 11:52:03.022978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-07-15 11:52:03.023017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.023223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.023285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.023524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.023591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-07-15 11:52:03.023803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.023837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.024020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.024053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.024232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.024289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.024522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.024585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.024806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.024840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.025008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.025065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.025283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.025316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.025545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.025607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.025808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.025844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.025996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.026029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-07-15 11:52:03.026322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.026355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.026615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.026677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.026903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.026936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.027170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.027231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.027539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.027608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.027844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.027877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.027995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.028027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.028222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.028260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.028496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.028568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.028811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.028844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-07-15 11:52:03.028998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.029030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.029218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.029292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.029521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.029582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.029789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.029823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.029945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.029978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.030126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.030186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.030443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.030505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.030717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.030810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.030964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.030997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.031145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.031195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-07-15 11:52:03.031388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.031450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.031695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.031774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.031938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.031972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.032092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.032124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.032303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.032374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.032595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.032627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.032800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.032845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.032956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-07-15 11:52:03.032987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-07-15 11:52:03.033221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.033282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.033544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.033607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-07-15 11:52:03.033854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.033889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.034054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.034116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.034342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.034403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.034626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.034684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.034916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.034949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.035095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.035147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.035298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.035331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.035493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.035549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.035794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.035827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.035940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.035972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-07-15 11:52:03.036184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.036247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.036453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.036508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.036703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.036773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.036917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.036949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.037099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.037151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.037401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.037460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.037660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.037716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.037931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.037970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.038180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.038238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.038477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.038535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-07-15 11:52:03.038780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.038814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.038929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.038962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.039111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.039143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.039290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.039322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.039502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.039559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.039816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.039849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.039978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.040023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.040276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.040336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.040540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.040596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.040816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.040849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-07-15 11:52:03.040995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.041027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.041175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.041231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.041442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.041506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.041762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.041828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.042034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.042060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.042214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.042253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.042423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.042479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.042713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.042811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-07-15 11:52:03.042964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-07-15 11:52:03.043000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.043174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.043224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 
00:25:55.369 [2024-07-15 11:52:03.043431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.043488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.043689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.043762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.043931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.043964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.044130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.044187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.044416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.044474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.044643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.044700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.044886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.044918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.045117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.045175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.045422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.045490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.045708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.045806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 
00:25:55.369 [2024-07-15 11:52:03.046014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.046078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.046320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.046379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.046681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.046770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.046945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.046978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.047163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.047220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.047463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.047520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.047721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.047799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.047932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.047970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.048132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.048189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.048430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.048487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 
00:25:55.369 [2024-07-15 11:52:03.048695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.048766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.048972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.049029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.049203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.049259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.049492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.049549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.049807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.049869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.050141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.050198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.050429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.050486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.050715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.050785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.050997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.051054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-07-15 11:52:03.051262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-07-15 11:52:03.051319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 
00:25:55.369 [2024-07-15 11:52:03.051572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.051630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.051837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.051895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.052155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.052213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.052423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.052481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.052747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.052809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.053029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.053086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.053336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.053394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.053608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.053664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.053887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.053953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.054187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.054249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 
00:25:55.370 [2024-07-15 11:52:03.054505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.054566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.054820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.054885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.055151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.055212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.055469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.055530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.055766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.055854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.056116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.056179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.056418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.056488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.056796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.056858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.057140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.057202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.057466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.057526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 
00:25:55.370 [2024-07-15 11:52:03.057683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.057715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.057921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.057981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.058232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.058265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.058392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.058424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.058642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.058700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.058924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.058982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.059244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.059276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.059464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.059502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.059731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.059805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.059989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.060047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 
00:25:55.370 [2024-07-15 11:52:03.060292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.060349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.060556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.060612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.060797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.060856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.061084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.061140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.061383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.061450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.061663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.061720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.061948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.062006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.062255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.062313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.062569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.062631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 00:25:55.370 [2024-07-15 11:52:03.062885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.370 [2024-07-15 11:52:03.062943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.370 qpair failed and we were unable to recover it. 
00:25:55.370 [2024-07-15 11:52:03.063162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.063219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.063453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.063512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.063736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.063842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.064052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.064109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.064290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.064347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.064550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.064615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.064795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.064854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.065122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.065184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.065406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.065468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.065689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.065772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 
00:25:55.371 [2024-07-15 11:52:03.065994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.066057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.066309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.066370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.066627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.066689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.066937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.067000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.067260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.067322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.067599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.067661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.067920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.067984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.068206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.068276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.068514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.068576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.068809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.068874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 
00:25:55.371 [2024-07-15 11:52:03.069133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.069195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.069408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.069470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.069732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.069831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.070057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.070120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.070338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.070401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.070617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.070678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.070913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.070976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.071192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.071263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.071523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.071584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.071838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.071901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 
00:25:55.371 [2024-07-15 11:52:03.072151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.072213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.072464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.072526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.072772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.072836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.073035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.073096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.073336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.073398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.073642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.073703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.073947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.074011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.074227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.074297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.074483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.074545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.074784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.074848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 
00:25:55.371 [2024-07-15 11:52:03.075090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.371 [2024-07-15 11:52:03.075151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.371 qpair failed and we were unable to recover it. 00:25:55.371 [2024-07-15 11:52:03.075404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.075466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.075678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.075753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.075962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.076023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.076268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.076330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.076574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.076636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.076881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.076945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.077201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.077263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.077512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.077573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.077839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.077906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-07-15 11:52:03.078149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.078212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.078508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.078569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.078822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.078885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.079133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.079195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.079448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.079517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.079752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.079816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.080025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.080087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.080312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.080382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.080692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.080767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.080990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.081052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-07-15 11:52:03.081276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.081337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.081599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.081661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.081946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.082011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.082239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.082302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.082514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.082576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.082804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.082869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.083092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.083155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.083338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.083410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.083657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.083719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.083965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.084028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-07-15 11:52:03.084267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.084328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.084573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.084635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.084858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.084923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.085133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.085194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.085414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.085476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.085681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.085761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.085979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.086043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.086311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.086374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.086586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.086648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-07-15 11:52:03.086857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-07-15 11:52:03.086922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-07-15 11:52:03.087178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-07-15 11:52:03.087240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1038 connect() failed with errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from 11:52:03.087 through 11:52:03.149; only the timestamps differ between repetitions ...]
00:25:55.378 [2024-07-15 11:52:03.149553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.378 [2024-07-15 11:52:03.149615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420
00:25:55.378 qpair failed and we were unable to recover it.
00:25:55.378 [2024-07-15 11:52:03.149795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.149845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.149979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.150011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.150181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.150215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.150418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.150452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.150581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.150621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.150796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.150829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.150953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.150986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.151163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.151195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.151374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.151406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.151606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.151639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-07-15 11:52:03.151764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.151799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.151928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.151960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.152072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.152105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.152252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.152284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.152414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.152447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.152631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.152664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.152831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.152865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.152987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.153020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.153221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.153254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.153435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.153468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-07-15 11:52:03.153612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.153645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.153825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.153856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-07-15 11:52:03.153972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-07-15 11:52:03.154005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.154155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.154186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.154330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.154361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.154546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.154576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.154688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.154719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.154862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.154893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.155026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.155057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.155233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.155264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-07-15 11:52:03.155377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.155408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.155588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.155619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.155815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.155848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.155971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.156002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.156193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.156223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.156422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.156453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.156592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.156622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.156753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.156785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.156916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.156948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.157130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.157161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-07-15 11:52:03.157376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.157407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.157558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.157589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.157750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.157797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.157924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.157953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.158076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.158111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.158271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.158300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.158412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.158441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.158581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.158610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.158808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.158838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.158998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.159028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-07-15 11:52:03.159142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.159171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.159321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.159351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.159526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.159556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.159695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.159724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.159866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.159897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.160082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.160113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.160226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.160256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.160420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.160461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.160579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.160609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.160763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.160816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-07-15 11:52:03.160924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.160953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.161108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.161137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.161319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.161347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-07-15 11:52:03.161496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-07-15 11:52:03.161524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.161667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.161696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.161828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.161857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.162014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.162042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.162182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.162211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.162388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.162416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.162572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.162600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-07-15 11:52:03.162750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.162790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.162912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.162940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.163092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.163120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.163270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.163299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.163449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.163478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.163652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.163680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.163863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.163892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.164017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.164045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.164218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.164246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.164385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.164412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-07-15 11:52:03.164513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.164541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.164682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.164710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.164889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.164917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.165029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.165057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.165194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.165226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.165375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.165402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.165589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.165616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.165750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.165778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.165892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.165920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.166119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.166146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-07-15 11:52:03.166290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.166318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.166491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.166519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.166690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.166718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.166887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.166913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.167070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.167096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.167196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.167223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.167365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.167392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.167536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.167562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.167713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.167750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.167897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.167924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-07-15 11:52:03.168113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.168140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.168308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.168334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.168506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.168532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-07-15 11:52:03.168703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-07-15 11:52:03.168729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.168904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.168931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.169099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.169126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.169227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.169253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.169366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.169393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.169578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.169605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.169785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.169813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 
00:25:55.381 [2024-07-15 11:52:03.169971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.169998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.170153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.170179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.170310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.170337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.170450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.170477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.170658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.170685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.170822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.170848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.170988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.171014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.171180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.171205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.171356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.171382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.171555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.171581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 
00:25:55.381 [2024-07-15 11:52:03.171764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.171792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.171963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.171995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.172129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.172154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.172322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.172348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.172494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.172524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.172694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.172720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.172875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.172900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.173064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.173090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.173224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.173249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.173390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.173416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 
00:25:55.381 [2024-07-15 11:52:03.173540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.173565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.173697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.173723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.173880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.173907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.174087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.174112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.174250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.174276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.174443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.174468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.174650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.174676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.174841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.174867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.174971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.174996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.175164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.175189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 
00:25:55.381 [2024-07-15 11:52:03.175323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.175358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.175514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.175538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.175641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.175666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.175855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.381 [2024-07-15 11:52:03.175882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.381 qpair failed and we were unable to recover it. 00:25:55.381 [2024-07-15 11:52:03.176005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.176030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.176194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.176218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.176359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.176391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.176526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.176551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.176699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.176724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.176912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.176937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 
00:25:55.382 [2024-07-15 11:52:03.177086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.177110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.177245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.177269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.177440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.177465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.177666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.177690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.177815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.177841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.177971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.178003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.178151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.178174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.178278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.178302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.178482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.178506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.178667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.178691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 
00:25:55.382 [2024-07-15 11:52:03.178925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.178950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.179122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.179146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.179310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.179334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.179468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.179492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.179667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.179695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.179854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.179880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.180054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.180079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.180245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.180269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.180384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.180407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.180559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.180584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 
00:25:55.382 [2024-07-15 11:52:03.180746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.180797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.180951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.180975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.181125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.181149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.181298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.181322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.181570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.181594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.181830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.181864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.181981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.182006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.182153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.182177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.382 [2024-07-15 11:52:03.182363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.382 [2024-07-15 11:52:03.182386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.382 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.182526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.182550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-07-15 11:52:03.182701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.182745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.182884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.182908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.183102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.183125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.183273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.183296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.183484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.183507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.183657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.183680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.183850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.183876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.184048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.184073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.184239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.184262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.184426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.184449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-07-15 11:52:03.184589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.184626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.184804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.184830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.184966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.184991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.185180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.185203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.185386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.185410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.185561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.185584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.185810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.185835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.186039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.186061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.186217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.186240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.186408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.186432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-07-15 11:52:03.186595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.186618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.186844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.186869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.187050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.187073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.187248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.187271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.187487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.187514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.187671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.187693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.187970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.188000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.188183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.188208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.188332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.188356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.188501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.188525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-07-15 11:52:03.188693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.188718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.188882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.188907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.189064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.189091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.189272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.189297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.189437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.189462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.189570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.189608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.189772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.189797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.189961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.189986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.190128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.190155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-07-15 11:52:03.190254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-07-15 11:52:03.190279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-07-15 11:52:03.190508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.190532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.190694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.190733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.190890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.190915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.191149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.191174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.191373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.191397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.191585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.191609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.191852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.191878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.192041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.192065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.192182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.192207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.192312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.192337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-07-15 11:52:03.192460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.192498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.192676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.192709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.192887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.192912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.193085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.193117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.193374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.193438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.193715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.193784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.193932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.193957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.194098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.194130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.194309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.194367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.194539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.194585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-07-15 11:52:03.194767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.194821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.194962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.194987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.195157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.195189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.195480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.195512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.195645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.195700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.195910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.195936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.196063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.196095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.196265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.196329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.196525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.196558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.196813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.196839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-07-15 11:52:03.197068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.197100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.197229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.197290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.197511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.197557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.197758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.197804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.197936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.197960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.198095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.198119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.198229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.198253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.198392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.198456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.198655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.198687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-07-15 11:52:03.198863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-07-15 11:52:03.198889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-07-15 11:52:03.199043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.199076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.199274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.199306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.199460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.199492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.199645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.199706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.199940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.199966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.200081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.200113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.200298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.200330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.200489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.200521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.200723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.200792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.200931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.200955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-07-15 11:52:03.201096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.201128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.201295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.201328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.201442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.201474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.201597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.201629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.201789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.201815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.201946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.201970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.202117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.202149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.202335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.202367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.202503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.202550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.202755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.202799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-07-15 11:52:03.202911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.202950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.203094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.203139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.203423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.203455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.203682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.203756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.203908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.203937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.204080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.204141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.204417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.204448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.204644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.204676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.204895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.204919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.205062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.205085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-07-15 11:52:03.205297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.205360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.205625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.205657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.205850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.205876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.206043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.206108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.206321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.206353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.206541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.206587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.206838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.206864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.207018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.207071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.207310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.207342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-07-15 11:52:03.207601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.207647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-07-15 11:52:03.207819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-07-15 11:52:03.207846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.207987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.208012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.208166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.208211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.208365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.208411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.208613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.208646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.208800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.208825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.208958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.208983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.209110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.209156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.209342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.209388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-07-15 11:52:03.209570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-07-15 11:52:03.209617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.256516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.256540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.256711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.256748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.256921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.256946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.257137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.257161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.257348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.257379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.257544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.257567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.257769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.257810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.257948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.257973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.258086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.258126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.258338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.258362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.258581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.258605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.258755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.258795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.259009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.259034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.259215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.259239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.259362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.259387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.259547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.259572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.259742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.259774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.259910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.259935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.260124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.260147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.260298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.260321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.260563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.260587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.260841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.260867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.261099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.261122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.261288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.261311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.261527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.261566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.261694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.261719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.261945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.261970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.262120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.262144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.262278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.262318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.262505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.262529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.262673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.262712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.262874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.262900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.263034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.263072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.263237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.263261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.263403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.263444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.263616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.263640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.263762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.263792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.263894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.263919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.264045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.264243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.264386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.264544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.264675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.264816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.264959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.264984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.265204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.265227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.265407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.265431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.265591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.265615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.265749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.265777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.265903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.265928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.266057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.266096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.266276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.266301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.266447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.266486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.266637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.266661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.266800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.266825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.266952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.266977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.267152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.267176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.267316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.267341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.267490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.267514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.267625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.267650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 
00:25:55.390 [2024-07-15 11:52:03.267795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.267822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.267954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.267979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.268114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.268153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.268269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.268293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.268432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.390 [2024-07-15 11:52:03.268457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.390 qpair failed and we were unable to recover it. 00:25:55.390 [2024-07-15 11:52:03.268586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.268611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.268772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.268798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.268936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.268961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.269136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.269159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.269294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.269334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.269506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.269545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.269670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.269709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.269896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.269922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.270102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.270126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.270313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.270338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.270489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.270513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.270652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.270690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.270828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.270853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.270977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.271146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.271317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.271459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.271639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.271830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.271959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.271985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.272144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.272170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.272293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.272333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.272500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.272542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.272711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.272747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.272890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.272915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.273079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.273104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.273238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.273264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.273362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.273387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.273520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.273546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.273709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.273756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.273915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.273940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.274040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.274065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.274172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.274196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.274340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.274365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.274539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.274564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.274689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.274714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.274856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.274882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.274986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.275010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.275171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.275195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.275318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.275343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.275820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.275849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.275973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.275998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.276109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.276134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.276263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.276287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.276432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.276457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.276659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.276684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.276879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.276920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.277042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.277068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.277231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.277257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.277437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.277476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.277633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.277658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.277812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.277839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.277972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.277997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.278156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.278180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.278335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.278360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.278507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.278548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.278675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.278711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.278849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.278876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.278970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.278995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.279106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.279131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.279294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.279319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.279485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.279508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.279655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.279701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.279857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.279883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.280049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.280073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.280261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.280293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.280456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.280488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.280622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.280668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.280822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.280848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.280956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.280981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.281161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.281186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.281370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.281422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.281577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.281609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.281786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.281812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.281939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.281964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-07-15 11:52:03.282137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.282186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.282361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.282393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.282543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.282577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-07-15 11:52:03.282734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-07-15 11:52:03.282787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.282898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.282923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.283106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.283145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.283291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.283343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.283509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.283542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.283691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.283724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.283863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.283888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.284030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.284054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.284223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.284255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.284405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.284438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.284607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.284640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.284767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.284811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.284910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.284935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.285041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.285065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.285217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.285262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.285366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.285399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.285568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.285601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.285775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.285826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.285967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.285995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.286146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.286171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.286270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.286294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.286403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.286436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.286566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.286612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.286801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.286826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.286968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.286992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.287151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.287175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.287345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.287377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.287513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.287546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.287731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.287761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.287874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.287899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.288046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.288078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.288235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.288259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.288389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.288431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.288536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.288568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.288719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.288756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.288907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.288932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.289072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.289118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.289231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.289263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.289417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.289449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.289595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.289627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.289788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.289813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.289989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.290013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.290154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.290193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.290348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.290380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.290521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.290554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.290708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.290747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.290923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.290955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.291103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.291192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.291334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.291384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.291522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.291554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.291680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.291712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.291865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.291897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.292046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.292078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.292223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.292256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.292400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.292433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.292571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.292604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.292757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.292790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.292930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.292981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.293104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.293159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.293299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.293331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.293476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.293508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.293652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.293684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.293882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.293934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.294062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.294115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.294292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.294342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.294515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.294548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.294690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.294723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-07-15 11:52:03.294904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.294937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.295079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.295112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.295221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.295254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.295429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.295462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-07-15 11:52:03.295608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-07-15 11:52:03.295640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.295798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.295852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.296028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.296061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.296212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.296264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.296401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.296434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.296566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.296598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.296722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.296761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.296886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.296919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.297059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.297090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.297257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.297290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.297442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.297475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.297615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.297646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.297794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.297826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.297965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.297997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.298113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.298145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.298268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.298293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.298446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.298470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.298578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.298610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.298753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.298786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.298931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.298962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.299104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.299136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.299284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.299316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.299495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.299526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.299672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.299704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.299855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.299888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.300026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.300058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.300203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.300235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.300425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.300463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.300606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.300638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.300781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.300815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.300934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.300988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.301104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.301158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.301271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.301302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.301419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.301453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.301622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.301653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.301775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.301807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.301955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.301987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.302110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.302142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.302282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.302313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.302449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.302480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.302590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.302622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.302749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.302782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.302969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.303001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.303174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.303207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.303352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.303384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.303554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.303585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.303721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.303761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.303905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.303938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.304107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.304139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.304284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.304315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.304486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.304518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.304640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.304671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.304817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.304849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.304981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.305033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.305229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.305281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.305425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.305458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.305595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.305627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.305794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.305852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.305991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.306042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.306191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.306222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.306358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.306390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.306567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.306600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.306758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.306792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.306979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.307030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.307161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.307210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.307355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.307387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.307527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.307560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.307732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.307775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.307935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.307986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.308177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.308227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.308351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.308407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.308549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.308582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.308768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.308795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.308932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.308957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.309118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.309144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.309246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.309270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-07-15 11:52:03.309396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.309420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.309552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-07-15 11:52:03.309577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-07-15 11:52:03.309714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.309745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.309892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.309935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.310047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.310078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.310233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.310265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.310433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.310464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.310602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.310633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.310776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.310810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 00:25:55.394 [2024-07-15 11:52:03.310974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.394 [2024-07-15 11:52:03.311025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.394 qpair failed and we were unable to recover it. 
00:25:55.394 [2024-07-15 11:52:03.311173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-07-15 11:52:03.311223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats back-to-back, with timestamps running from 11:52:03.311391 up to the final occurrence below ...]
00:25:55.688 [2024-07-15 11:52:03.350825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.688 [2024-07-15 11:52:03.350854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420
00:25:55.688 qpair failed and we were unable to recover it.
00:25:55.688 [2024-07-15 11:52:03.351036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.351068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.351185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.351242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.351386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.351418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.351590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.351622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.351788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.351818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.351982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.352029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.352216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.352269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.352441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.352473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.352642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.352675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.352816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.352846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 
00:25:55.688 [2024-07-15 11:52:03.352980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.353009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.353178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.353229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.353349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.353381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.353556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.353588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.353762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.353795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.353933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.353985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.354158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.354209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.354381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.354414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.354560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.354592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.354709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.354747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 
00:25:55.688 [2024-07-15 11:52:03.354900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.688 [2024-07-15 11:52:03.354952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.688 qpair failed and we were unable to recover it. 00:25:55.688 [2024-07-15 11:52:03.355104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.355156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.355302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.355335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.355481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.355514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.355654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.355686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.355848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.355882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.355994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.356027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.356140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.356173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.356345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.356378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.356554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.356586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 
00:25:55.689 [2024-07-15 11:52:03.356729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.356779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.356949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.356982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.357138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.357170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.357365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.357416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.357563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.357596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.357753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.357786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.357969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.358021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.358151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.358201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.358371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.358407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.358563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.358596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 
00:25:55.689 [2024-07-15 11:52:03.358767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.358800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.358964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.359018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.359206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.359258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.359404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.359435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.359547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.359578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.359703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.359767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.689 qpair failed and we were unable to recover it. 00:25:55.689 [2024-07-15 11:52:03.359923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.689 [2024-07-15 11:52:03.359974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.360152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.360203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.360376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.360407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.360567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.360599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 
00:25:55.690 [2024-07-15 11:52:03.360751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.360784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.360954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.360986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.361120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.361172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.361315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.361346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.361483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.361515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.361665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.361696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.361815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.361846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.362017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.362050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.362217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.362249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.362387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.362418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 
00:25:55.690 [2024-07-15 11:52:03.362537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.362568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.362719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.362760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.362906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.362938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.363060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.363091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.363259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.363292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.363416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.363448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.363587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.363618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.363762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.363795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.363941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.363995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.364168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.364199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 
00:25:55.690 [2024-07-15 11:52:03.364344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.364375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.364548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.364580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.364713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.364783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.364980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.365034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.365180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.365232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.365372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.365403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.365545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.365577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.365753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.365786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.365907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.365943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.690 [2024-07-15 11:52:03.366118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.366149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 
00:25:55.690 [2024-07-15 11:52:03.366287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.690 [2024-07-15 11:52:03.366319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.690 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.366492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.366525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.366670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.366701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.366878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.366910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.367054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.367087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.367208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.367239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.367407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.367438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.367584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.367616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.367784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.367816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.367951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.367982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 
00:25:55.691 [2024-07-15 11:52:03.368116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.368148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.368293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.368324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.368475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.368507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.368657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.368690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.368878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.368911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.369083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.369115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.369248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.369279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.369446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.369478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.369600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.369632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.369807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.369839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 
00:25:55.691 [2024-07-15 11:52:03.369987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.370019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.370196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.370228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.370414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.370447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.370620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.370651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.370801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.370858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.371051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.371102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.371255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.371306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.371476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.371509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.371649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.371680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.371840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.371893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 
00:25:55.691 [2024-07-15 11:52:03.372050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.372104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.372270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.372302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.372442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.372474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.372642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.372674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.372824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.372858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.373029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.373061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.373203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.691 [2024-07-15 11:52:03.373234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.691 qpair failed and we were unable to recover it. 00:25:55.691 [2024-07-15 11:52:03.373368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.373401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.373543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.373581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.373728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.373771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 
00:25:55.692 [2024-07-15 11:52:03.373914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.373946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.374091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.374122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.374293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.374325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.374440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.374471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.374642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.374673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.374840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.374899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.375084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.375137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.375332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.375384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.375535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.375567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.375743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.375776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 
00:25:55.692 [2024-07-15 11:52:03.375927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.375978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.376138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.376191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.376355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.376406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.376558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.376591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.376710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.376764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.376878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.376909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.377076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.377107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.377276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.377308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.377477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.377509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.377650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.377682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 
00:25:55.692 [2024-07-15 11:52:03.377858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.377891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.378060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.378132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.378330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.378384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.378555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.378587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.378732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.378774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.378914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.378968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.379142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.379174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.379347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.379379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.379521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.379554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.379729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.379770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 
00:25:55.692 [2024-07-15 11:52:03.379934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.379985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.380131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.380187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.380361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.380393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.692 qpair failed and we were unable to recover it. 00:25:55.692 [2024-07-15 11:52:03.380529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.692 [2024-07-15 11:52:03.380561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.380707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.380760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.380933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.380965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.381124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.381157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.381315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.381368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.381514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.381551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.381694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.381726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 
00:25:55.693 [2024-07-15 11:52:03.381877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.381910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.382053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.382084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.382254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.382285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.382430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.382462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.382600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.382632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.382779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.382815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.382946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.382977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.383123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.383155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.383304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.383336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.383508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.383539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 
00:25:55.693 [2024-07-15 11:52:03.383678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.383711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.383900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.383935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.384090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.384122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.384295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.384326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.384467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.384500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.384612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.384643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.384822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.384878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.385050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.385082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.385231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.385264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.385437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.385470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 
00:25:55.693 [2024-07-15 11:52:03.385583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.385614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.385759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.385791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.385978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.386029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.386215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.386268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.386421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.386453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.386635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.386666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.386825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.386876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.387009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.387064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.387258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.387309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.387488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.387520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 
00:25:55.693 [2024-07-15 11:52:03.387663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.387694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.387852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.387904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.693 [2024-07-15 11:52:03.388055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.693 [2024-07-15 11:52:03.388107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.693 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.388272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.388323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.388493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.388525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.388672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.388703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.388882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.388915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.389084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.389117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.389287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.389325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.389459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.389490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 
00:25:55.694 [2024-07-15 11:52:03.389642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.389674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.389880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.389932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.390069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.390122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.390309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.390362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.390510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.390541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.390687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.390719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.390868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.390901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.391043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.391075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.391222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.391254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.391421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.391454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 
00:25:55.694 [2024-07-15 11:52:03.391597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.391629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.391800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.391833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.392014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.392045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.392182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.392214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.392360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.392393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.392564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.392595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.392747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.392779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.392920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.392953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.393110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.393142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.393256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.393288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 
00:25:55.694 [2024-07-15 11:52:03.393464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.393496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.393620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.393653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.393829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.393881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.394006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.394037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.394207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.394240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.394382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.394414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.394588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.394620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.394774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.394808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.394955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.394988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.395166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.395197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 
00:25:55.694 [2024-07-15 11:52:03.395368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.395399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.395547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.395580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.395782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.395817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.395959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.396012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.396169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.396220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.396364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.396396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.396562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.396594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.396730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.396768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.396917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.396980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.397166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.397218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 
00:25:55.694 [2024-07-15 11:52:03.397384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.397417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.397565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.397596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.397716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.397760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.397928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.397960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.398074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.398107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.398256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.398288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.694 [2024-07-15 11:52:03.398408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.694 [2024-07-15 11:52:03.398440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.694 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.398613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.398646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.398789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.398821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.398962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.398994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 
00:25:55.695 [2024-07-15 11:52:03.399139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.399171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.399339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.399371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.399547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.399578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.399756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.399788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.399923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.399978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.400098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.400156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.400336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.400367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.400491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.400522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.400695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.400727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.400865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.400922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 
00:25:55.695 [2024-07-15 11:52:03.401071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.401123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.401290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.401321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.401496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.401528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.401648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.401681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.401845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.401900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.402048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.402080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.402253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.402285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.402454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.402486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.402626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.402657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.402814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.402871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 
00:25:55.695 [2024-07-15 11:52:03.403023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.403073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.403221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.403274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.403443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.403475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.403651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.403683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.403822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.403880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.404055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.404088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.404277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.404329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.404502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.404534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.404674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.404712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.404916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.404968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 
00:25:55.695 [2024-07-15 11:52:03.405114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.405166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.405281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.405339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.405514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.405546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.405717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.405758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.405896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.405949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.406137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.406189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.406375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.406426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.406565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.406597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.406709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.406750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.406935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.406989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 
00:25:55.695 [2024-07-15 11:52:03.407141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.407193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.407349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.407401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.407580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.407612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.407731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.407772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.407947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.407979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.408156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.408188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.408330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.408381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.408525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.408557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.408697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.408729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 00:25:55.695 [2024-07-15 11:52:03.408867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.695 [2024-07-15 11:52:03.408900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.695 qpair failed and we were unable to recover it. 
00:25:55.695 [2024-07-15 11:52:03.409038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.409070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.409213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.409245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.409411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.409443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.409588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.409620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.409792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.409826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.409978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.410011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.410156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.410189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.410303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.410336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.410474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.410506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.410677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.410710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 
00:25:55.696 [2024-07-15 11:52:03.410925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.410978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.411160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.411194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.411368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.411401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.411557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.411589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.411747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.411780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.411954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.411986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.412134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.412192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.412359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.412406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.412602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.412658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.412848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.412881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 
00:25:55.696 [2024-07-15 11:52:03.413064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.413110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.413311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.413357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.413533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.413579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.413764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.413813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.413956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.413988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.414165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.414197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.414336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.414386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.414526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.414575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.414772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.414806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.414940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.414972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 
00:25:55.696 [2024-07-15 11:52:03.415104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.415151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.415359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.415406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.415589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.415636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.415812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.415845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.696 [2024-07-15 11:52:03.416005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.696 [2024-07-15 11:52:03.416037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.696 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.416207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.416253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.416451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.416498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.416668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.416700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.416875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.416908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.417075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.417137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 
00:25:55.697 [2024-07-15 11:52:03.417370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.417432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.417643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.417705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.417908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.417941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.418088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.418120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.418265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.418311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.418478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.418525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.418721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.418765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.418937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.418969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.419097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.419144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.419347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.419393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 
00:25:55.697 [2024-07-15 11:52:03.419544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.419590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.419795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.419828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.419937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.419969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.420116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.420174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.420375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.420421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.420649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.420695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.420919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.420951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.421117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.421163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.421348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.421405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.421587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.421637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 
00:25:55.697 [2024-07-15 11:52:03.421808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.421840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.421982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.697 [2024-07-15 11:52:03.422014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.697 qpair failed and we were unable to recover it. 00:25:55.697 [2024-07-15 11:52:03.422197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.422246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.422456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.422505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.422682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.422714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.422893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.422927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.423079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.423124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.423326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.423372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.423520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.423566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.423765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.423813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 
00:25:55.698 [2024-07-15 11:52:03.423983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.424029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.424239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.424285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.424442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.424489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.424667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.424717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.424937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.424986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.425171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.425220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.425398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.425448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.425662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.425711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.425938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.425987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.426206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.426255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 
00:25:55.698 [2024-07-15 11:52:03.426439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.426488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.426644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.426694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.426906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.426956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.427149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.427199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.427407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.427456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.427647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.427696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.427927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.427990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.428229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.428293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.428517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.428573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.428816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.428879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 
00:25:55.698 [2024-07-15 11:52:03.429101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.429162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.429361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.429418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.429654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.429716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.429976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.430034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.430267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.430323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.430540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.430602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.698 [2024-07-15 11:52:03.430845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.698 [2024-07-15 11:52:03.430908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.698 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.431141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.431198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.431391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.431463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.431668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.431729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 
00:25:55.699 [2024-07-15 11:52:03.431959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.432021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.432250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.432313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.432511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.432572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.432834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.432898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.433125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.433187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.433417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.433478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.433713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.433791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.434032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.434081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.434267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.434315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.434519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.434568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 
00:25:55.699 [2024-07-15 11:52:03.434761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.434811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.435022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.435071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.435265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.435314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.435519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.435568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.435750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.435800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.436017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.436066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.436218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.436268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.436470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.436520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.436678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.436727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.436956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.437007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 
00:25:55.699 [2024-07-15 11:52:03.437197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.437247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.437435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.437485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.437664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.437713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.437875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.437925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.438104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.438153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.438338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.438388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.438545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.438623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.438825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.438876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.439082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.439132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 00:25:55.699 [2024-07-15 11:52:03.439317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.439365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.699 qpair failed and we were unable to recover it. 
00:25:55.699 [2024-07-15 11:52:03.439551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.699 [2024-07-15 11:52:03.439600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.439809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.439859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.440067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.440116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.440304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.440354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.440550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.440611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.440826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.440918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.441114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.441176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.441398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.441459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.441687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.441772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.441951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.442001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 
00:25:55.700 [2024-07-15 11:52:03.442182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.442231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.442424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.442474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.442686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.442736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.442934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.442983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.443158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.443207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.443386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.443434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.443640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.443689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.443880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.443930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.444113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.444162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.444350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.444399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 
00:25:55.700 [2024-07-15 11:52:03.444560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.444609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.444800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.444851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.445037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.445087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.445271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.445321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.445529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.445578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.445762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.445811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.446020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.446069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.446247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.446296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.446477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.446538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.446751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.446822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 
00:25:55.700 [2024-07-15 11:52:03.447028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.447077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.447266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.447315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.447499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.447549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.447761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.447810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.447995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.700 [2024-07-15 11:52:03.448044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.700 qpair failed and we were unable to recover it. 00:25:55.700 [2024-07-15 11:52:03.448245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.448295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.448495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.448544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.448697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.448762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.448950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.448999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.449178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.449227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 
00:25:55.701 [2024-07-15 11:52:03.449401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.449450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.449626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.449675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.449867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.449918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.450127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.450177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.450370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.450419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.450623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.450672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.450835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.450886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.451036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.451086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.451289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.451338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.451546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.451597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 
00:25:55.701 [2024-07-15 11:52:03.451784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.451835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.452053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.452115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.452333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.452396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.452644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.452706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.452899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.452948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.453152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.453202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.453354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.453435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.453635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.453684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.453876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.453926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.454071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.454147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 
00:25:55.701 [2024-07-15 11:52:03.454311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.454363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.454526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.454579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.454801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.454854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.455038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.455091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.455261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.455314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.455496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.455548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.455728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.455790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.456005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.456058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.456282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.456336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.701 qpair failed and we were unable to recover it. 00:25:55.701 [2024-07-15 11:52:03.456502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.701 [2024-07-15 11:52:03.456555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 
00:25:55.702 [2024-07-15 11:52:03.456774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.456827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.457027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.457080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.457277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.457331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.457544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.457597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.457794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.457848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.458063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.458125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.458285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.458338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.458504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.458558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.458780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.458834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.459047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.459100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 
00:25:55.702 [2024-07-15 11:52:03.459291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.459344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.459507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.459560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.459785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.459840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.460003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.460056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.460217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.460270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.460428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.460480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.460694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.460760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.460928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.460981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.461164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.461216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.461383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.461435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 
00:25:55.702 [2024-07-15 11:52:03.461647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.461700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.461871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.461924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.462140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.462193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.462381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.462434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.462604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.462656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.462890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.462945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.463125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.463178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.463326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.463379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.463592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.463645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.463858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.463912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 
00:25:55.702 [2024-07-15 11:52:03.464114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.464167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.464375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.464428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.464660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.464722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.464951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.465004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.465172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.465224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.465434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.465487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.465716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.465810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.466000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.466053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.466274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.466326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.466514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.466567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 
00:25:55.702 [2024-07-15 11:52:03.466785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.466840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.467051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.702 [2024-07-15 11:52:03.467104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.702 qpair failed and we were unable to recover it. 00:25:55.702 [2024-07-15 11:52:03.467322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.467375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.467560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.467612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.467782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.467837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.468049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.468115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.468340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.468393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.468608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.468661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.468892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.468946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.469158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.469212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 
00:25:55.703 [2024-07-15 11:52:03.469403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.469456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.469648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.469702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.469921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.469984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.470211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.470273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.470518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.470580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.470817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.470881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.471116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.471178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.471405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.471468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.471716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.471805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.472007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.472060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 
00:25:55.703 [2024-07-15 11:52:03.472250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.472303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.472469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.472549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.472772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.472826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.473022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.473076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.473245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.473300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.473508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.473561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.473760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.473815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.474006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.474061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.474253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.474306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.474465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.474518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 
00:25:55.703 [2024-07-15 11:52:03.474682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.474736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.474989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.475044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.475275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.475329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.475515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.475569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.475768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.475823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.476012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.476067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.476237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.476291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.476484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.476538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.476702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.476765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.476957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.477011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 
00:25:55.703 [2024-07-15 11:52:03.477199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.477252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.477464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.477519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.477704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.477770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.477960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.478013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.478197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.478250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.478464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.478530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.478732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.478801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.479028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.479085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.479307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.479364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.479573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.479626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 
00:25:55.703 [2024-07-15 11:52:03.479835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.479889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.480080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.480133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.480291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.480344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.703 qpair failed and we were unable to recover it. 00:25:55.703 [2024-07-15 11:52:03.480530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.703 [2024-07-15 11:52:03.480583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.480771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.480826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.481013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.481067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.481271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.481324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.481532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.481585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.481771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.481825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.482025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.482077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 
00:25:55.704 [2024-07-15 11:52:03.482263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.482316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.482511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.482564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.482724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.482829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.483069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.483130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.483360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.483422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.483666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.483729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.483967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.484029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.484275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.484338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.484580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.484641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.484887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.484950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 
00:25:55.704 [2024-07-15 11:52:03.485195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.485257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.485505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.485566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.485817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.485882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.486112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.486174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.486422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.486484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.486730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.486803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.487016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.487069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.487290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.487343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.487563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.487616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.487772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.487825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 
00:25:55.704 [2024-07-15 11:52:03.488034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.488091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.488309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.488366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.488560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.488616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.488849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.488907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.489138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.489192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.489405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.489466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.489683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.489746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.489940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.489993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.490175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.490227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.490408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.490461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 
00:25:55.704 [2024-07-15 11:52:03.490680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.490732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.490978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.491032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.491220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.491273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.491487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.491539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.491727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.491794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.492019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.492072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.492284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.492340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.492505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.492562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.492761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.492819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.493039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.493096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 
00:25:55.704 [2024-07-15 11:52:03.493322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.493378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.493623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.493681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.493916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.493976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.494179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.704 [2024-07-15 11:52:03.494236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.704 qpair failed and we were unable to recover it. 00:25:55.704 [2024-07-15 11:52:03.494463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.494519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.494710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.494783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.495017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.495075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.495295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.495352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.495550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.495607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.495787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.495846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 
00:25:55.705 [2024-07-15 11:52:03.496044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.496100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.496298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.496355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.496583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.496641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.496867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.496926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.497122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.497178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.497349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.497406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.497624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.497682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.497867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.497925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.498146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.498203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.498415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.498476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 
00:25:55.705 [2024-07-15 11:52:03.498733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.498825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.499050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.499107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.499304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.499360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.499552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.499609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.499803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.499862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.500056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.500122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.500318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.500375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.500606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.500664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.500885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.500943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.501111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.501168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 
00:25:55.705 [2024-07-15 11:52:03.501357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.501413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.501632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.501689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.501921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.501980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.502182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.502238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.502410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.502466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.502704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.502795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.503025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.503083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.503302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.503360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.503554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.503611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.503832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.503890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 
00:25:55.705 [2024-07-15 11:52:03.504111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.504169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.504390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.504448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.504682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.504768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.504972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.505029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.505211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.505269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.505459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.505516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.505703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.505773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.505983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.506040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.506260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.506317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.506527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.506584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 
00:25:55.705 [2024-07-15 11:52:03.506782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.506841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.507065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.507123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.507371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.507429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.507594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.507650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.507871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.705 [2024-07-15 11:52:03.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.705 qpair failed and we were unable to recover it. 00:25:55.705 [2024-07-15 11:52:03.508124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.508181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.508377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.508434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.508637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.508693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.508905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.508963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.509160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.509216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 
00:25:55.706 [2024-07-15 11:52:03.509439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.509495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.509690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.509759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.509994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.510050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.510268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.510325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.510556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.510617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.510858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.510924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.511123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.511180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.511406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.511463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.511686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.511765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.512012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.512069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 
00:25:55.706 [2024-07-15 11:52:03.512287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.512343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.512559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.512621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.512872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.512933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.513157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.513214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.513441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.513503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.513768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.513827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.513994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.514050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.514271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.514328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.514563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.514620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.514825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.514883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 
00:25:55.706 [2024-07-15 11:52:03.515114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.515171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.515394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.515451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.515644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.515700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.515945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.516002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.516223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.516280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.516501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.516558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.516791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.516849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.517046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.517103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.517274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.517331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.517534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.517595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 
00:25:55.706 [2024-07-15 11:52:03.517830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.517887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.518089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.518146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.518336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.518394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.518589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.518645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.518851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.518909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.519131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.519188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.519461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.519517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.519708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.519777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.520011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.520067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 00:25:55.706 [2024-07-15 11:52:03.520291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.706 [2024-07-15 11:52:03.520348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.706 qpair failed and we were unable to recover it. 
00:25:55.707 [2024-07-15 11:52:03.520574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.520630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.520832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.520890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.521122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.521179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.521373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.521430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.521647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.521704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.521940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.522010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.522233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.522290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.522511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.522567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.522769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.522827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.523023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.523080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 
00:25:55.707 [2024-07-15 11:52:03.523307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.523364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.523574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.523635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.523896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.523954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.524208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.524269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.524481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.524542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.524772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.524835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.525068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.525130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.525366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.525428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.525660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.525721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.525997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.526060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 
00:25:55.707 [2024-07-15 11:52:03.526303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.526365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.526582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.526643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.526887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.526950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.527152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.527214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.527452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.527513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.527721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.527793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.528026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.528087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.528297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.528359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.528589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.528651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.528900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.528963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 
00:25:55.707 [2024-07-15 11:52:03.529171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.529232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.529473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.529535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.529756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.529820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.530029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.530090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.530292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.530354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.530588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.530649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.530897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.530960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.531194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.531256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.531496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.531557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.531789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.531853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 
00:25:55.707 [2024-07-15 11:52:03.532084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.532145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.532347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.532409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.532639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.532702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.532944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.533007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.533187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.533249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.533478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.533550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.533769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.533832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.534068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.534130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.534363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.534424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 00:25:55.707 [2024-07-15 11:52:03.534623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.534685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.707 qpair failed and we were unable to recover it. 
00:25:55.707 [2024-07-15 11:52:03.534932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.707 [2024-07-15 11:52:03.534995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.535229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.535291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.535493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.535554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.535796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.535860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.536061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.536122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.536336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.536398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.536607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.536669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.536917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.536980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.537212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.537273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.537489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.537551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 
00:25:55.708 [2024-07-15 11:52:03.537781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.537844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.538082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.538144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.538350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.538411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.538623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.538684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.538904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.538966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.539204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.539265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.539469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.539531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.539771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.539836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.540043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.540104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 00:25:55.708 [2024-07-15 11:52:03.540309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.708 [2024-07-15 11:52:03.540371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.708 qpair failed and we were unable to recover it. 
00:25:55.708 [2024-07-15 11:52:03.540550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.708 [2024-07-15 11:52:03.540612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:55.708 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 11:52:03.540 through 11:52:03.601, with only the microsecond timestamps advancing ...]
00:25:55.713 [2024-07-15 11:52:03.601353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.713 [2024-07-15 11:52:03.601414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:55.713 qpair failed and we were unable to recover it.
00:25:55.713 [2024-07-15 11:52:03.601616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.601678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.601912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.601975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.602180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.602242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.602472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.602533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.602773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.602836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.603065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.603127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.603355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.603416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.603628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.603690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.603940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.604003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.604237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.604298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 
00:25:55.713 [2024-07-15 11:52:03.604530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.604591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.604875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.604940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.713 [2024-07-15 11:52:03.605143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.713 [2024-07-15 11:52:03.605205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.713 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.605408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.605469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.605696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.605769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.605977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.606038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.606240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.606301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.606583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.606645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.606856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.606919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.607140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.607202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 
00:25:55.714 [2024-07-15 11:52:03.607409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.607469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.607680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.607754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.608006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.608068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.608299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.608359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.608542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.608603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.608809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.608871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.609108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.609170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.609402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.609464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.609673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.609733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.610020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.610080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 
00:25:55.714 [2024-07-15 11:52:03.610319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.610380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.610619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.610680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.610900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.610971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.611191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.611252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.611501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.611563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.611794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.611856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.612056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.612117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.612317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.612379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.612581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.612641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.612863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.612925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 
00:25:55.714 [2024-07-15 11:52:03.613166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.613226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.613477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.613538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.714 [2024-07-15 11:52:03.613773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.714 [2024-07-15 11:52:03.613836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.714 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.614043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.614103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.614333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.614394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.614627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.614688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.614952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.615014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.615216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.615278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.615563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.615625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.615825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.615888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 
00:25:55.715 [2024-07-15 11:52:03.616122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.616183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.616412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.616473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.616711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.616805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.617013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.617073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.617306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.617367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.617610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.617671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.617917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.617979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.618218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.618279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.618509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.618569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.618816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.618878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 
00:25:55.715 [2024-07-15 11:52:03.619118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.619179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.619384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.619444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.619621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.619682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.619946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.620008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.620248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.620308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.620540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.620601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.620808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.620870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.621105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.621166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.621397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.621457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.621696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.621773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 
00:25:55.715 [2024-07-15 11:52:03.621996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.622057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.622290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.622352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.622535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.622605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.622840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.622903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.623133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.623195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.623428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.623489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.623700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.623774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.623952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.624014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.624215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.624275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.624451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.624513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 
00:25:55.715 [2024-07-15 11:52:03.624714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.624807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.625049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.625110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.625327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.715 [2024-07-15 11:52:03.625389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.715 qpair failed and we were unable to recover it. 00:25:55.715 [2024-07-15 11:52:03.625622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.625683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.625931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.625992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.626202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.626264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.626489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.626551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.626766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.626828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.627032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.627094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.627297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.627359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 
00:25:55.716 [2024-07-15 11:52:03.627563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.627624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.627908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.627972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.628208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.628269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.628475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.628536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.628768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.628831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.629069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.629130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.629340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.629402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.629606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.629668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.629887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.629949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.630196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.630258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 
00:25:55.716 [2024-07-15 11:52:03.630490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.630552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.630783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.630845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.631086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.631147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.631378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.631439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.631647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.631708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.631956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.632017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.632227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.632288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.632523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.632586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.632823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.632886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.633087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.633148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 
00:25:55.716 [2024-07-15 11:52:03.633375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.633437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.633676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.633753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.633997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.634068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.634281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.634343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.634572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.634634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.634877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.634940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.635181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.635242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.635425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.635486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.635688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.716 [2024-07-15 11:52:03.635762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.716 qpair failed and we were unable to recover it. 00:25:55.716 [2024-07-15 11:52:03.635972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.636033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 
00:25:55.717 [2024-07-15 11:52:03.636242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.636302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.636530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.636592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.636798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.636861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.637095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.637156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.637368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.637430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.637646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.637707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.637978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.638039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.638270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.638332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.638562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.638623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 00:25:55.717 [2024-07-15 11:52:03.638856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.717 [2024-07-15 11:52:03.638920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.717 qpair failed and we were unable to recover it. 
00:25:55.994 [2024-07-15 11:52:03.639121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.639183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.639400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.639462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.639636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.639697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.639948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.640010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.640239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.640300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.640529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.640591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.640804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.640868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.641071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.641134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.641336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.641398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.641645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.641706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 
00:25:55.994 [2024-07-15 11:52:03.641936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.641997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.642225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.994 [2024-07-15 11:52:03.642288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.994 qpair failed and we were unable to recover it. 00:25:55.994 [2024-07-15 11:52:03.642494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.642556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.642787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.642849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.643078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.643139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.643379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.643440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.643641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.643702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.643960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.644022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.644262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.644323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.644525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.644587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 
00:25:55.995 [2024-07-15 11:52:03.644793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.644856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.645100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.645161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.645342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.645413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.645621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.645683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.645904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.645965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.646209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.646270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.646509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.646571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.646802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.646866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.647096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.647158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.647439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.647501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 
00:25:55.995 [2024-07-15 11:52:03.647733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.647807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.648037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.648098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.648331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.648392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.648625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.648687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.648924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.648986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.649192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.649253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.649498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.649559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.649792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.649855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.650063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.650124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.650365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.650426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 
00:25:55.995 [2024-07-15 11:52:03.650634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.650695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.650941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.651003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.651191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.651253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.651498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.651559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.651791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.651854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.652099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.652161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.652442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.652503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.995 [2024-07-15 11:52:03.652749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.995 [2024-07-15 11:52:03.652813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.995 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.653023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.653084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.653327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.653389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 
00:25:55.996 [2024-07-15 11:52:03.653645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.653706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.653904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.653966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.654159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.654221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.654453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.654514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.654720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.654796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.655004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.655065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.655300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.655361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.655591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.655652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.655908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.655970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.656212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.656274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 
00:25:55.996 [2024-07-15 11:52:03.656505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.656566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.656804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.656868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.657049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.657120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.657296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.657357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.657579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.657639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.657884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.657947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.658145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.658207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.658434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.658495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.658730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.658803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.659011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.659073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 
00:25:55.996 [2024-07-15 11:52:03.659303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.659363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.659576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.659637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.659834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.659897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.660129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.660190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.660396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.660457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.660688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.660760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.661010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.661071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.661279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.661341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.661543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.661604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.661837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.661900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 
00:25:55.996 [2024-07-15 11:52:03.662083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.662144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.662343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.662404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.662610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.662671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.996 [2024-07-15 11:52:03.662912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.996 [2024-07-15 11:52:03.662974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.996 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.663216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.663277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.663493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.663554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.663787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.663850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.664053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.664115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.664398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.664460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.664700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.664799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 
00:25:55.997 [2024-07-15 11:52:03.665047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.665108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.665351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.665412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.665644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.665705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.665965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.666026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.666261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.666322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.666562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.666624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.666908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.666971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.667177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.667238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.667472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.667534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.667763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.667825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 
00:25:55.997 [2024-07-15 11:52:03.668038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.668100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.668382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.668444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.668644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.668715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.668944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.669006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.669243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.669303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.669535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.669596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.669804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.669866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.670151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.670212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.670439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.670500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.670733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.670807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 
00:25:55.997 [2024-07-15 11:52:03.671048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.671108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.671348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.671410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.671648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.671709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.671957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.672019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.672266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.672326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.672520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.672581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.672831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.672894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.673097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.673159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.673389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.673450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.997 [2024-07-15 11:52:03.673693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.673767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 
00:25:55.997 [2024-07-15 11:52:03.674008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.997 [2024-07-15 11:52:03.674070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.997 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.674301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.674363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.674596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.674657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.674907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.674970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.675180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.675240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.675485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.675546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.675769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.675832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.676041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.676102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.676342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.676403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.676624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.676686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 
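Editor's note on the repeated pair above: the host-side NVMe/TCP initiator is retrying its connection in a tight loop. connect() to 10.0.0.2:4420 returns errno 111, which on Linux is ECONNREFUSED, so nvme_tcp_qpair_connect_sock cannot establish the queue pair, the qpair is torn down ("qpair failed and we were unable to recover it."), and the initiator immediately retries. A minimal, hypothetical shell probe of the same condition (not part of the test scripts; assumes nc is installed) would be:

    # Probe the port the initiator keeps retrying; a refused connection here
    # corresponds to the "connect() failed, errno = 111" lines in the log.
    if nc -z -w 1 10.0.0.2 4420; then
        echo "port 4420 is accepting connections"
    else
        echo "connection refused or timed out (errno 111 = ECONNREFUSED)"
    fi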
00:25:55.998 [2024-07-15 11:52:03.676954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.677017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.677250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.677312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.677547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.677609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.677812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.677876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.678107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.678170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.678405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.678466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3132657 Killed "${NVMF_APP[@]}" "$@" 00:25:55.998 [2024-07-15 11:52:03.678705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.678782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.678992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.679054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:55.998 [2024-07-15 11:52:03.679290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:55.998 [2024-07-15 11:52:03.679352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 
00:25:55.998 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.998 [2024-07-15 11:52:03.679585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.998 [2024-07-15 11:52:03.679648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.998 [2024-07-15 11:52:03.679912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.679977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.681240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.681272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.681436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.681464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.681572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.681599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.681733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.681767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.681897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.681924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.682054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.682081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.682241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.682268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 
00:25:55.998 [2024-07-15 11:52:03.682401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.682428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.682562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.682589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.683058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.683088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.683252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.683280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.683442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.683469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.683595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.683627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.998 [2024-07-15 11:52:03.683775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.998 [2024-07-15 11:52:03.683802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.998 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.683940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.683967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.684074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.684101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 
00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3133212 00:25:55.999 [2024-07-15 11:52:03.684259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:55.999 [2024-07-15 11:52:03.684287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3133212 00:25:55.999 [2024-07-15 11:52:03.684429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.684457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.684562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.684590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3133212 ']' 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.684729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.999 [2024-07-15 11:52:03.684763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.999 [2024-07-15 11:52:03.684923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.684951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.999 [2024-07-15 11:52:03.685110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.999 [2024-07-15 11:52:03.685138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 
00:25:55.999 11:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.999 [2024-07-15 11:52:03.685281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.685308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.685449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.685476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.685609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.685636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.685770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.685798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.685920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.685978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.686094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.686152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.686310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.686338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.686453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.686480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.686690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.686717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 
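Editor's note on the interleaved trace above: target_disconnect.sh test case tc2 has killed the previous target process (the shell's 'Killed "${NVMF_APP[@]}"' message from line 36 of the script) and is now running disconnect_init 10.0.0.2, which starts a fresh nvmf_tgt (-i 0 -e 0xFFFF -m 0xF0, PID 3133212 here) inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket. The ECONNREFUSED storm is expected during this window, since nothing is listening on port 4420 until the new target re-creates its listener. A rough, hypothetical sketch of that restart-and-wait pattern (not the actual helpers from SPDK's test common scripts) is:

    # Start a new target in the test namespace, then poll its RPC socket until
    # the app is up; host-side connect() keeps failing with errno 111 meanwhile.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"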
00:25:55.999 [2024-07-15 11:52:03.686856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.686883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.686994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.687021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.687152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.687179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.687298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.687329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.687492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.687518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.687623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.687651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:55.999 [2024-07-15 11:52:03.687783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.999 [2024-07-15 11:52:03.687810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:55.999 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.687946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.687973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.688095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.688121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.688283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.688311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 
00:25:56.000 [2024-07-15 11:52:03.688442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.688469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.688595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.688621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.688761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.688789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.689061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.689267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.689431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.689564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.689691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.689845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.689971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 
00:25:56.000 [2024-07-15 11:52:03.690167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.690323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.690471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.690642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.690786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.690939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.690964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.691076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.691100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.691225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.691250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.691423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.691462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.691610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.691635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 
00:25:56.000 [2024-07-15 11:52:03.691796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.691822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.691950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.691975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.692105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.692144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.692299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.692337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.692487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.692526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.692631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.692656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.692789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.692828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.693047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.693071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.693222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.693246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.693366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.693392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 
00:25:56.000 [2024-07-15 11:52:03.693541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.693567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.693709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.693734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.693904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.000 [2024-07-15 11:52:03.693928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.000 qpair failed and we were unable to recover it. 00:25:56.000 [2024-07-15 11:52:03.694092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.694120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.694253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.694277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.694378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.694403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.694554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.694578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.694694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.694718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.694854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.694878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.694987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 
00:25:56.001 [2024-07-15 11:52:03.695135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.695287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.695465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.695630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.695815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.695941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.695966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.696095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.696120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.696262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.696301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.696446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.696470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.696601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.696626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 
00:25:56.001 [2024-07-15 11:52:03.696726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.696757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.696915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.696941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.697045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.697085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.697254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.697293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.697407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.697446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.697582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.697606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.697770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.697796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.697894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.697919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.698079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.698119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.698233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.698272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 
00:25:56.001 [2024-07-15 11:52:03.698417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.698442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.698547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.698571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.698720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.698750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.698849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.698874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.699013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.699052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.699191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.699231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.699374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.699412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.699546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.699571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.699715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.699743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.001 qpair failed and we were unable to recover it. 00:25:56.001 [2024-07-15 11:52:03.699880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.001 [2024-07-15 11:52:03.699905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 
00:25:56.002 [2024-07-15 11:52:03.700054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.700078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.700248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.700270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.700419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.700444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.700557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.700585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.700681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.700706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.700858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.700884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.701018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.701170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.701336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.701495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 
00:25:56.002 [2024-07-15 11:52:03.701620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.701797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.701953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.701978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.702089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.702113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.702253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.702291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.702423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.702461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.702610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.702634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.702792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.702817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.702920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.702945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.703072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.703096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 
00:25:56.002 [2024-07-15 11:52:03.703259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.703283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.703448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.703472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.703582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.703606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.703708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.703733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.703865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.703890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.704036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.704204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.704341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.704494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.704614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 
00:25:56.002 [2024-07-15 11:52:03.704742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.704867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.704892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.705016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.705040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.705166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.705210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.705316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.705342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.705459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.002 [2024-07-15 11:52:03.705484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.002 qpair failed and we were unable to recover it. 00:25:56.002 [2024-07-15 11:52:03.705607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.705631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.705778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.705804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.705916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.705941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.706036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.706066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 
00:25:56.003 [2024-07-15 11:52:03.706184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.706220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.706394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.706436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.706550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.706588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.706725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.706756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.706860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.706885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.707011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.707155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.707326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.707500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.707652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 
00:25:56.003 [2024-07-15 11:52:03.707797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.707942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.707967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.708899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.708924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.709068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.709093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 
00:25:56.003 [2024-07-15 11:52:03.709214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.709239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.709362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.709387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.709526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.709566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.709715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.709745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.709869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.709894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.709990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.710156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.710297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.710450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.710582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 
00:25:56.003 [2024-07-15 11:52:03.710696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.710882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.710907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.003 qpair failed and we were unable to recover it. 00:25:56.003 [2024-07-15 11:52:03.711001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.003 [2024-07-15 11:52:03.711026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.711143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.711167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.711340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.711379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.711484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.711523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.711637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.711661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.711779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.711804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.711907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.711932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.712082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.712107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 
00:25:56.004 [2024-07-15 11:52:03.712249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.712273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.712422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.712460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.712580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.712605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.712720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.712765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.712924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.712948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.713036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.713178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.713332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.713482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.713594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 
00:25:56.004 [2024-07-15 11:52:03.713726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.713895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.713920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.714068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.714108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.714252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.714276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.714409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.714433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.714574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.714613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.714709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.714734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.714871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.714896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.715002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.715027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.715156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.715179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 
00:25:56.004 [2024-07-15 11:52:03.715302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.715326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.715427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.715452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.715614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.004 [2024-07-15 11:52:03.715639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.004 qpair failed and we were unable to recover it. 00:25:56.004 [2024-07-15 11:52:03.715777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.715817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.715948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.715973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.716142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.716166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.716333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.716356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.716508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.716532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.716635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.716659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.716783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.716808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 
00:25:56.005 [2024-07-15 11:52:03.716951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.716980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.717164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.717188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.717306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.717331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.717453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.717491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.717604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.717629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.717747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.717772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.717876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.717901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 
00:25:56.005 [2024-07-15 11:52:03.718375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.718867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.718995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.719146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.719323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.719454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.719583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.719726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 
00:25:56.005 [2024-07-15 11:52:03.719893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.719919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.720060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.720259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.720424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.720557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.720717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.720892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.720991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.721015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.721116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.721141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.005 qpair failed and we were unable to recover it. 00:25:56.005 [2024-07-15 11:52:03.721268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.005 [2024-07-15 11:52:03.721293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 
00:25:56.006 [2024-07-15 11:52:03.721426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.721451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.721556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.721581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.721669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.721693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.721849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.721874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.721993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.722135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.722275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.722418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.722543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.722670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 
00:25:56.006 [2024-07-15 11:52:03.722860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.722888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.723917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.723956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.724069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.724228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 
00:25:56.006 [2024-07-15 11:52:03.724359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.724512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.724630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.724768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.724887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.724911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.725043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.725068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.725211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.725236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.725357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.725381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.725589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.725613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.725791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.725817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 
00:25:56.006 [2024-07-15 11:52:03.725923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.725948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.726043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.726067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.726213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.726238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.726365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.726389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.726529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.726554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.726677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.006 [2024-07-15 11:52:03.726702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.006 qpair failed and we were unable to recover it. 00:25:56.006 [2024-07-15 11:52:03.726831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.726856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.727005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.727151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.727315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 
00:25:56.007 [2024-07-15 11:52:03.727493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.727686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.727825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.727973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.727997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.728129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.728268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.728409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.728526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.728636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.728767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 
00:25:56.007 [2024-07-15 11:52:03.728966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.728998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.729104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.729129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.729280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.729320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.729475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.729499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.729621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.729646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.729746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.729772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.729877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.729903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.730012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.730145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.730263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 
00:25:56.007 [2024-07-15 11:52:03.730382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.730536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.730703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.730893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.730918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.731024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.731185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.731339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.731508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.731626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.731773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 
00:25:56.007 [2024-07-15 11:52:03.731933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.731958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.732085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.007 [2024-07-15 11:52:03.732125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.007 qpair failed and we were unable to recover it. 00:25:56.007 [2024-07-15 11:52:03.732295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.732334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.732460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.732484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.732594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.732618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.732759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.732785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.732886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.732911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.733009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.733152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.733294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 
00:25:56.008 [2024-07-15 11:52:03.733437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.733583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.733653] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:56.008 [2024-07-15 11:52:03.733708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 [2024-07-15 11:52:03.733731] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.733862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.733886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.734009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.734144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.734296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.734444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.734582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 
00:25:56.008 [2024-07-15 11:52:03.734765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.734921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.734946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.735912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.735937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.736112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.736137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 
00:25:56.008 [2024-07-15 11:52:03.736301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.736325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.736463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.736488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.736577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.736616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.736761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.736791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.736883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.736909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.737002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.737027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.737127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.737167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.008 qpair failed and we were unable to recover it. 00:25:56.008 [2024-07-15 11:52:03.737292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.008 [2024-07-15 11:52:03.737317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.737436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.737460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.737570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.737595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 
00:25:56.009 [2024-07-15 11:52:03.737710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.737755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.737856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.737880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.738957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.738982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.739104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.739144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 
00:25:56.009 [2024-07-15 11:52:03.739263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.739288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.739425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.739450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.739575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.739600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.739746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.739771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.739898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.739923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.740023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.740181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.740334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.740452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.740610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 
00:25:56.009 [2024-07-15 11:52:03.740770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.740939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.740964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.741119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.741142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.741285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.741309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.741480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.741504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.741642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.741667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.741768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.741794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.009 [2024-07-15 11:52:03.741889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.009 [2024-07-15 11:52:03.741915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.009 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.742050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.742179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 
00:25:56.010 [2024-07-15 11:52:03.742317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.742500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.742644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.742809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.742934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.742959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.743082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.743106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.743257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.743295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.743402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.743427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.743565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.743590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.743744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.743768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 
00:25:56.010 [2024-07-15 11:52:03.743869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.743894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.744931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.744957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.745080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.745197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 
00:25:56.010 [2024-07-15 11:52:03.745338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.745486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.745630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.745761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.745911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.745935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.746030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.746174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.746322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.746436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.746564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 
00:25:56.010 [2024-07-15 11:52:03.746749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.746894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.746933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.747037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.747062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.010 [2024-07-15 11:52:03.747199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.010 [2024-07-15 11:52:03.747223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.010 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.747319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.747343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.747456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.747481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.747629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.747653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.747796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.747821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.747944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.747969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.748098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.748137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 
00:25:56.011 [2024-07-15 11:52:03.748243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.748267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.748379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.748403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.748522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.748546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.748695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.748718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.748849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.748874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.748998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.749135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.749272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.749438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.749605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 
00:25:56.011 [2024-07-15 11:52:03.749754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.749889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.749914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.750023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.750047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.750189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.750229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.750405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.750429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.750569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.750608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.750743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.750768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.750873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.750898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.751016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.751040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.751180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.751219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 
00:25:56.011 [2024-07-15 11:52:03.751329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.751353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.751499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.751523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.751698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.751722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.751875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.751899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.751991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.752145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.752258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.752428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.752596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 00:25:56.011 [2024-07-15 11:52:03.752786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.011 qpair failed and we were unable to recover it. 
00:25:56.011 [2024-07-15 11:52:03.752921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.011 [2024-07-15 11:52:03.752948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.753080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.753103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.753254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.753278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.753401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.753425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.753576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.753599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.753749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.753790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.753926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.753951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.754093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.754117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.754301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.754324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.754461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.754485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 
00:25:56.012 [2024-07-15 11:52:03.754600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.754624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.754763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.754789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.754926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.754950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.755094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.755133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.755247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.755272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.755412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.755436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.755559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.755583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.755722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.755766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.755890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.755914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.756066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.756105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 
00:25:56.012 [2024-07-15 11:52:03.756277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.756300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.756411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.756434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.756546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.756570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.756703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.756727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.756882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.756906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.757023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.757063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.757164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.757187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.757326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.757351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.757512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.757551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.757687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.757710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 
00:25:56.012 [2024-07-15 11:52:03.757846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.757872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.757990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.758015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.758149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.758188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.758335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.758374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.758528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.758567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.758702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.758746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.758847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.758871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.012 [2024-07-15 11:52:03.759019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.012 [2024-07-15 11:52:03.759044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.012 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.759162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.759186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.759358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.759397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 
00:25:56.013 [2024-07-15 11:52:03.759513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.759541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.759674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.759698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.759842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.759882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.760010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.760033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.760166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.760204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.760314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.760338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.760484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.760508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.760696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.760719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.760860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.760885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.761020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.761044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 
00:25:56.013 [2024-07-15 11:52:03.761182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.761220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.761391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.761414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.761553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.761590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.761734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.761766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.761895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.761920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.762035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.762059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.762191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.762215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.762352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.762376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.762520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.762544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.762689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.762728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 
00:25:56.013 [2024-07-15 11:52:03.762875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.762898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.763929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.763955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.013 qpair failed and we were unable to recover it. 00:25:56.013 [2024-07-15 11:52:03.764046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.013 [2024-07-15 11:52:03.764070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.764208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.764232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 
00:25:56.014 [2024-07-15 11:52:03.764399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.764423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.764503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.764527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.764668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.764692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.764854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.764879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.764989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.765122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.765282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.765453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.765618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.765756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 
00:25:56.014 [2024-07-15 11:52:03.765920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.765950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.766966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.766991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.767162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.767186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.767359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.767383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 
00:25:56.014 [2024-07-15 11:52:03.767598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.767622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.767762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.767801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.767932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.767957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.768150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.768197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.768525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.768548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.768700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.768723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.768899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.768924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.769070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.769108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.769229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.769259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.769392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.769416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 
00:25:56.014 [2024-07-15 11:52:03.769555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.769593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.769812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.769837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.769996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.770019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.014 [2024-07-15 11:52:03.770151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.014 [2024-07-15 11:52:03.770182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.014 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.770319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.770343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.770513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.770551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.770712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.770755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.770866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.770906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.771007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.771180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 
00:25:56.015 [2024-07-15 11:52:03.771335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.771487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.771636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.771784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.771912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.771937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.772064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.772088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.772325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.772349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.772500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.772523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.772784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.772823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.772943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.772967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 
00:25:56.015 [2024-07-15 11:52:03.773095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.773122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.773287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.773310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.773442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.773466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.773683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.773706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.773856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.773880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.774010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.774049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.774210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.774233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.774338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.774362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.015 [2024-07-15 11:52:03.774494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.774519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.774665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.774689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 
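Editor's note: interleaved with the connect failures above, the EAL message "No free 2048 kB hugepages reported on node 1" records that DPDK's environment abstraction layer saw no free 2 MB hugepages on NUMA node 1 at init time. A quick way to confirm the per-node counters behind that message is to read the standard Linux sysfs files; the snippet below is an illustrative sketch only (it assumes the host actually exposes a NUMA node 1), not part of the test output.

/* Editor's sketch: print free vs. total 2048 kB hugepages for NUMA node 1,
 * the counters behind the "No free 2048 kB hugepages reported on node 1"
 * EAL message above. Standard Linux sysfs paths; assumes node1 exists. */
#include <stdio.h>

static long read_long(const char *path)
{
    long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1) {
            v = -1;
        }
        fclose(f);
    }
    return v;
}

int main(void)
{
    const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    long free_pages = read_long(path);

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    long total_pages = read_long(path);

    /* -1 means the sysfs file was not readable (e.g. no such NUMA node). */
    printf("node1 2048kB hugepages: free=%ld total=%ld\n", free_pages, total_pages);
    return 0;
}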
00:25:56.015 [2024-07-15 11:52:03.774928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.774952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.775098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.775121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.775248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.775271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.775400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.775428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.775549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.775576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.775729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.775759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.775977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.776001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.776199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.776223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.776359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.776382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.776517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.776540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 
00:25:56.015 [2024-07-15 11:52:03.776673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.015 [2024-07-15 11:52:03.776697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.015 qpair failed and we were unable to recover it. 00:25:56.015 [2024-07-15 11:52:03.776826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.776851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.776965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.776988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.777104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.777128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.777279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.777302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.777474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.777498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.777620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.777644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.777801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.777840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.777998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.778022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.778143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.778182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 
00:25:56.016 [2024-07-15 11:52:03.778342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.778367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.778490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.778529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.778664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.778689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.778837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.778862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.778992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.779163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.779312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.779429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.779602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.779777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 
00:25:56.016 [2024-07-15 11:52:03.779928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.779953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.780140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.780164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.780309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.780333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.780497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.780536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.780707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.780751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.780871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.780896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.781045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.781084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.781223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.781246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.781410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.781449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.781579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.781618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 
00:25:56.016 [2024-07-15 11:52:03.781752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.781777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.781905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.781930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.782049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.782074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.782188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.782217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.782353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.782377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.782526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.782564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.782716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.782744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.782920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.016 [2024-07-15 11:52:03.782944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.016 qpair failed and we were unable to recover it. 00:25:56.016 [2024-07-15 11:52:03.783107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.783130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.783273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.783296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 
00:25:56.017 [2024-07-15 11:52:03.783428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.783453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.783607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.783631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.783782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.783807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.783955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.783980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.784103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.784141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.784276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.784301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.784464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.784503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.784638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.784663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.784836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.784861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.784986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.785010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 
00:25:56.017 [2024-07-15 11:52:03.785160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.785198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.785371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.785394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.785535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.785573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.785736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.785767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.785887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.785912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.786059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.786097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.786239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.786262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.786427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.786467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.786637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.786661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.786802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.786827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 
00:25:56.017 [2024-07-15 11:52:03.786954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.786979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.787096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.787121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.787242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.787267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.787416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.787456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.787586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.787624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.787763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.787788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.787924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.787949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.788096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.788136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.788299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.788322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.788460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.788484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 
00:25:56.017 [2024-07-15 11:52:03.788615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.788640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.788776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.788801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.788896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.788921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.789049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.789077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.789203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.789243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.017 [2024-07-15 11:52:03.789345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.017 [2024-07-15 11:52:03.789370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.017 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.789461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.789486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.789604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.789629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.789719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.789749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.789898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.789922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 
00:25:56.018 [2024-07-15 11:52:03.790052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.790091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.790222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.790247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.790382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.790407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.790523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.790547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.790672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.790697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.790822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.790848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.790997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.791022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.791201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.791224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.791352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.791377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.791515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.791540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 
00:25:56.018 [2024-07-15 11:52:03.791675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.791699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.791836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.791861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.791977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.792182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.792336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.792509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.792665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.792817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.792940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.792964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 00:25:56.018 [2024-07-15 11:52:03.793081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.018 [2024-07-15 11:52:03.793106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.018 qpair failed and we were unable to recover it. 
00:25:56.018 [2024-07-15 11:52:03.793254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.018 [2024-07-15 11:52:03.793297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:56.018 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 11:52:03.793 through 11:52:03.812 ...]
00:25:56.021 [2024-07-15 11:52:03.812043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the repeated connect() failed (errno = 111) / sock connection error / qpair failed sequence resumes immediately after the NOTICE and continues through 11:52:03.833 ...]
00:25:56.024 [2024-07-15 11:52:03.833200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.024 [2024-07-15 11:52:03.833224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:56.024 qpair failed and we were unable to recover it.
00:25:56.024 [2024-07-15 11:52:03.833365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.833390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.833507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.833532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.833708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.833751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.833912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.833937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.834073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.834116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.834292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.834316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.834503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.834526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.834748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.834774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.834935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.834969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.835194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.835224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 
00:25:56.024 [2024-07-15 11:52:03.835403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.835427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.835624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.835648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.024 [2024-07-15 11:52:03.835836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.024 [2024-07-15 11:52:03.835861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.024 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.836034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.836058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.836274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.836298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.836442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.836466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.836661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.836685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.836833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.836859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.837021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.837185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 
00:25:56.025 [2024-07-15 11:52:03.837352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.837483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.837657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.837840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.837964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.837988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.838111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.838135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.838232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.838257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.838405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.838429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.838636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.838659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.838804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.838830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 
00:25:56.025 [2024-07-15 11:52:03.838987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.839011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.839151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.839190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.839446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.839469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.839692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.839731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.839862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.839886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.840142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.840165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.840375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.840398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.840529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.840553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.840695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.840719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.840947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.840972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 
00:25:56.025 [2024-07-15 11:52:03.841143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.841166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.841333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.841356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.841553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.841577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.841736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.841780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.841962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.842002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.842218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.842243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.842422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.842445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.842667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.842691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.842874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.842899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.025 qpair failed and we were unable to recover it. 00:25:56.025 [2024-07-15 11:52:03.843059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.025 [2024-07-15 11:52:03.843083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 
00:25:56.026 [2024-07-15 11:52:03.843223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.843252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.843437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.843460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.843654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.843677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.843831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.843856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.844025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.844050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.844330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.844353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.844601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.844625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.844777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.844802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.844989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.845013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.845229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.845252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 
00:25:56.026 [2024-07-15 11:52:03.845401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.845424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.845643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.845667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.845874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.845899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.846014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.846039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.846290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.846317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.846462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.846486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.846659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.846683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.846842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.846867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.846978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.847003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.847180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.847204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 
00:25:56.026 [2024-07-15 11:52:03.847311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.847350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.847525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.847573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.847725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.847790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.847972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.847996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.848179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.848203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.848375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.848399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.848511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.848554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.848764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.848789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.848986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.849011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.849174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.849197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 
00:25:56.026 [2024-07-15 11:52:03.849462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.849495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.849693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.849717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.849896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.849929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.850074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.850098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.850465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.026 [2024-07-15 11:52:03.850492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.026 qpair failed and we were unable to recover it. 00:25:56.026 [2024-07-15 11:52:03.850642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.850666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.850811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.850837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.850943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.850968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.851106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.851130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.851305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.851329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 
00:25:56.027 [2024-07-15 11:52:03.851500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.851532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.851767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.851792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.851922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.851947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.852168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.852202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.852387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.852411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.852594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.852618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.852779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.852805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.853000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.853024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.853279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.853314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.853457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.853481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 
00:25:56.027 [2024-07-15 11:52:03.853692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.853723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.853911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.853936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.854102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.854126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.854251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.854290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.854462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.854501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.854700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.854743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.854876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.854901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.854997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.855022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.855113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.855137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.855364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.855388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 
00:25:56.027 [2024-07-15 11:52:03.855566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.855590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.855796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.855822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.855950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.855975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.856149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.856172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.856357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.856380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.856502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.856541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.856749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.856775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.856936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.856961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.857147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.857171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.857301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.857325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 
00:25:56.027 [2024-07-15 11:52:03.857465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.857489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.857699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-07-15 11:52:03.857725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.027 qpair failed and we were unable to recover it. 00:25:56.027 [2024-07-15 11:52:03.857923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.857947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.858103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.858127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.858249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.858300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.858406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.858431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.858653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.858692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.858869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.858894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.859008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.859033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.859177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.859216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 
00:25:56.028 [2024-07-15 11:52:03.859334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.859359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.859540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.859565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.859724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.859753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.859891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.859915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.860070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.860108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.860264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.860288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.860429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.860468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.860648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.860671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.860829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.860854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.860984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.861008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 
00:25:56.028 [2024-07-15 11:52:03.861182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.861206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.861373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.861412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.861636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.861660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.861846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.861872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.862000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.862025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.862194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.862218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.862389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.862414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.862632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.862656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.862809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.862840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.863078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.863104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 
00:25:56.028 [2024-07-15 11:52:03.863311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.863335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.863484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.863508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.863780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.863806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.028 [2024-07-15 11:52:03.863961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.028 [2024-07-15 11:52:03.863995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.028 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.864145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.864169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.864321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.864344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.864561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.864584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.864843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.864868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.865062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.865086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.865241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.865264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 
00:25:56.029 [2024-07-15 11:52:03.865410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.865452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.865680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.865704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.865903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.865929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.866066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.866089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.866278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.866306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.866490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.866513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.866706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.866730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.866848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.866872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.866994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.867148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 
00:25:56.029 [2024-07-15 11:52:03.867311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.867473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.867653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.867775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.867961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.867985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.868074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.868113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.868211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.868236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.868334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.868366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.868532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.868556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.868750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.868790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 
00:25:56.029 [2024-07-15 11:52:03.868936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.868961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.869120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.869159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.869303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.869327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.869458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.869483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.869623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.869647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.869813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.869839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.869932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.869957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.870083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.870107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.029 [2024-07-15 11:52:03.870229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.029 [2024-07-15 11:52:03.870268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.029 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.870399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.870423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 
00:25:56.030 [2024-07-15 11:52:03.870606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.870646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.870788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.870813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.870947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.870971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.871955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.871980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 
00:25:56.030 [2024-07-15 11:52:03.872143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.872168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.872326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.872350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.872525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.872550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.872713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.872742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.872904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.872933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.873083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.873106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.873250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.873274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.873427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.873467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.873575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.873600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.873715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.873744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 
00:25:56.030 [2024-07-15 11:52:03.873854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.873879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.873976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.874000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.874231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.874255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.874401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.874425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.874638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.874662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.874801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.874826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.874922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.874946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.875068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.875092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.875222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.875263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.875506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.875529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 
00:25:56.030 [2024-07-15 11:52:03.875697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.875722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.875854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.875879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.876027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.876051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.876143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.876168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.876288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.030 [2024-07-15 11:52:03.876313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.030 qpair failed and we were unable to recover it. 00:25:56.030 [2024-07-15 11:52:03.876409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.876434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.876585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.876610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.876704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.876728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.876833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.876857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.876948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.876972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 
00:25:56.031 [2024-07-15 11:52:03.877100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.877250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.877389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.877536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.877692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.877842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.877956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.877980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.878078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.878103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.878272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.878296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.878482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.878506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 
00:25:56.031 [2024-07-15 11:52:03.878728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.878773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.878911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.878935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.879033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.879057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.879206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.879231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.879407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.879431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.879585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.879624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.879777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.879803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.880024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.880048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.880189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.880213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.880355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.880379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 
00:25:56.031 [2024-07-15 11:52:03.880525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.880564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.880663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.880688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.880851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.880876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.880978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.881139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.881290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.881526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.881658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.881836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.881951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.881977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 
00:25:56.031 [2024-07-15 11:52:03.882142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.882181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.882318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.031 [2024-07-15 11:52:03.882342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.031 qpair failed and we were unable to recover it. 00:25:56.031 [2024-07-15 11:52:03.882527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.882566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.882725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.882768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.882897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.882922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.883022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.883046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.883155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.883180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.883311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.883336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.883490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.883529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.883703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.883727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 
00:25:56.032 [2024-07-15 11:52:03.883886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.883911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.884917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.884942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.885058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.885083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.885231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.885256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 
00:25:56.032 [2024-07-15 11:52:03.885373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.885405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.885575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.885613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.885747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.885773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.885860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.885885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.886010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.886034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.886131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.886156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.886304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.886328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.886549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.886573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.886766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.886792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.886894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.886919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 
00:25:56.032 [2024-07-15 11:52:03.887053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.887078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.032 [2024-07-15 11:52:03.887245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.032 [2024-07-15 11:52:03.887278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.032 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.887429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.887454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.887682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.887712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.887877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.887902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.888026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.888065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.888264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.888289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.888466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.888490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.888721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.888752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.888909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.888934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 
00:25:56.033 [2024-07-15 11:52:03.889138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.889175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.889334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.889358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.889479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.889503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.889686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.889725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.889872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.889897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.890018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.890050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.890250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.890274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.890414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.890438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.890608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.890647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.890803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.890829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 
00:25:56.033 [2024-07-15 11:52:03.890924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.890949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.891076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.891113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.891288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.891311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.891491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.891531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.891697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.891725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.891889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.891913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.892036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.892180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.892343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.892474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 
00:25:56.033 [2024-07-15 11:52:03.892653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.892828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.892966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.892991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.893191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.893215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.893382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.893406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.893577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.893601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.893769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.893794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.893916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.893941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.033 [2024-07-15 11:52:03.894059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.033 [2024-07-15 11:52:03.894084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.033 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.894231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.894269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 
00:25:56.034 [2024-07-15 11:52:03.894414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.894442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.894598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.894636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.894835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.894861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.894981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.895006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.895227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.895260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.895416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.895440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.895580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.895619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.895771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.895810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.895980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.896005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.896181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.896205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 
00:25:56.034 [2024-07-15 11:52:03.896364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.896388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.896592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.896625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.896806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.896831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.896957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.896995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.897117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.897155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.897360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.897391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.897562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.897586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.897782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.897822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.897947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.897971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.898109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.898134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 
00:25:56.034 [2024-07-15 11:52:03.898335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.898358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.898548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.898575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.898732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.898778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.898893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.898917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.899011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.899146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.899298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.899413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.899630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.899798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 
00:25:56.034 [2024-07-15 11:52:03.899922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.899947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.900044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.900069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.900190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.900214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.900415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.900438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.900557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.900581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.034 qpair failed and we were unable to recover it. 00:25:56.034 [2024-07-15 11:52:03.900733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.034 [2024-07-15 11:52:03.900763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.900887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.900912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.901047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.901071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.901236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.901261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.901430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.901454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 
00:25:56.035 [2024-07-15 11:52:03.901631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.901655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.901769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.901794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.901888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.901913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.902058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.902083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.902193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.902232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.902354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.902378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.902520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.902545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.902795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.902820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.903004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.903029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.903190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.903214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 
00:25:56.035 [2024-07-15 11:52:03.903386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.903409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.903586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.903610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.903779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.903804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.903934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.903958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.904101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.904139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.904335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.904359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.904560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.904584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.904726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.904761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.904881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.904906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.905060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.905099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 
00:25:56.035 [2024-07-15 11:52:03.905258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.905283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.905474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.905502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.905643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.905668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.905823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.905848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.905986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.906010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.906169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.906193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.906350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.906373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.906513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.906554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.906685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.906725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.906867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.906892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 
00:25:56.035 [2024-07-15 11:52:03.906999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.907024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.907170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.035 [2024-07-15 11:52:03.907209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.035 qpair failed and we were unable to recover it. 00:25:56.035 [2024-07-15 11:52:03.907306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.907330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.907547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.907593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.907769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.907809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.907939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.907963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.908135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.908175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.908373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.908396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.908566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.908590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.908779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.908804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 
00:25:56.036 [2024-07-15 11:52:03.908964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.908989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.909097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.909137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.909306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.909346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.909547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.909574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.909753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.909810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.909950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.909974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.910163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.910195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.910369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.910393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.910602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.910626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.910782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.910830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 
00:25:56.036 [2024-07-15 11:52:03.910964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.910989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.911141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.911180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.911355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.911379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.911527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.911566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.911710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.911753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.911868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.911892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.912012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.912036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.912193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.912232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.912388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.912412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.912526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.912550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 
00:25:56.036 [2024-07-15 11:52:03.912687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.912711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.912846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.912876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.912997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.913155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.913298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.913482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.913605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.913779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.913903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.913927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.914020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.914044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 
00:25:56.036 [2024-07-15 11:52:03.914181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.914206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.914357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.914396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.036 [2024-07-15 11:52:03.914577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.036 [2024-07-15 11:52:03.914617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.036 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.914800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.914841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.914934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.914959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.915084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.915109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.915233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.915257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.915462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.915485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.915659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.915683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.915860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.915886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 
00:25:56.037 [2024-07-15 11:52:03.916055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.916079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.916293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.916317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.916491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.916515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.916734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.916790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.916915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.916939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.917087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.917126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.917237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.917262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.917425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.917450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.917585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.917610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.917748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.917773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 
00:25:56.037 [2024-07-15 11:52:03.917954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.917977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.918139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.918163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.918294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.918333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.918467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.918491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.918728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.918785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.918911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.918936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.919113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.919138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.919258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.919282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.919453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.919492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.919645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.919674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 
00:25:56.037 [2024-07-15 11:52:03.919849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.919874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.920002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.920031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.920223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.920247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.920380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.920404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.920562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.920602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.920734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.920771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.920926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.037 [2024-07-15 11:52:03.920951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.037 qpair failed and we were unable to recover it. 00:25:56.037 [2024-07-15 11:52:03.921131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.921154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.921349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.921373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.921572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.921596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 
00:25:56.038 [2024-07-15 11:52:03.921754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.921794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.921919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.921959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.922141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.922171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.922331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.922355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.922532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.922556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.922715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.922743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.922917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.922942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.923148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.923172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.923324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.923348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.923580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.923605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 
00:25:56.038 [2024-07-15 11:52:03.923753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.923792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.923939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.923964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.924086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.924125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.924258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.924297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.924458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.924497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.924634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.924673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.924842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.924867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.924990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.925015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.925143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.925182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.925446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.925476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 
00:25:56.038 [2024-07-15 11:52:03.925618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.925642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.925835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.925860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.925993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.926017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.926116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.926141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.926300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.926325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.926459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.926497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.926661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.926685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.926801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.926826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.926983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.927008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.927131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.927156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 
00:25:56.038 [2024-07-15 11:52:03.927311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.927336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.927495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.927532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.927717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.927773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.927937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.927962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.928129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.928153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.928301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.928325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.038 [2024-07-15 11:52:03.928556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.038 [2024-07-15 11:52:03.928580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.038 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.928769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.928795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.929014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.929062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.929303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.929336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 
00:25:56.039 [2024-07-15 11:52:03.929499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.929523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.929690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.929714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.929935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.929969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.930103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.930127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.930339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.930363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.930518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.930542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.930726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.930757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.931020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.931045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.931214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.931237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.931444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.931467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 
00:25:56.039 [2024-07-15 11:52:03.931623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.931647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.931857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.931882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.932023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.932070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.932219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.932243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.932440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.932464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.932606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.932630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.932862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.932888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.933024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.933048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.933253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.933285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.933470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.933494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 
00:25:56.039 [2024-07-15 11:52:03.933685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.933708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.933882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.933906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.934098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.934121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.934325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.934357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.934516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.934540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.934717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.934746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.934935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.934959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.935138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.935161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.935333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.935356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.935569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.935593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 
00:25:56.039 [2024-07-15 11:52:03.935751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.935792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.935957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.935986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.936211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.936240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.936385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.936410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.936593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.936617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.936735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.039 [2024-07-15 11:52:03.936786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.039 qpair failed and we were unable to recover it. 00:25:56.039 [2024-07-15 11:52:03.936962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.936987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.937111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.937150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.937290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.937329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.937430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.937455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 
00:25:56.040 [2024-07-15 11:52:03.937569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.937593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.937718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.937748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.937885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.937910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.938911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.938936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 
00:25:56.040 [2024-07-15 11:52:03.939059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.939084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.939240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.939264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.939381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.939420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.939669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.939693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.939928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.939953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.940072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.940097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.940247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.940286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.940398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.940422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.940578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.940603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.940706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.940731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 
00:25:56.040 [2024-07-15 11:52:03.940863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.940888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.941038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.941211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.941360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.941516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.941676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.941830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.941979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.942105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.942255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 
00:25:56.040 [2024-07-15 11:52:03.942380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.942526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.942674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.942796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.942924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.942949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.040 [2024-07-15 11:52:03.943068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.040 [2024-07-15 11:52:03.943092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.040 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.943212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.943237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.943383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.943408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.943522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.943546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.943660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.943685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 
00:25:56.041 [2024-07-15 11:52:03.943778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.943803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.943929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.943954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.944953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.944977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.945099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 
00:25:56.041 [2024-07-15 11:52:03.945247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.945367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.945483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.945626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.945772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.945897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.945921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.946047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.946160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.946339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.946489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 
00:25:56.041 [2024-07-15 11:52:03.946651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.946795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.946948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.946973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.947099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.947123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.947244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.947268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.947363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.947388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.947489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.947514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.947640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.947664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.041 [2024-07-15 11:52:03.947785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.041 [2024-07-15 11:52:03.947810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.041 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.947930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.947955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 
00:25:56.042 [2024-07-15 11:52:03.948074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.948199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.948377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.948522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.948686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.948844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.948957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.948982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.949112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.949137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.949232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.949256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.949406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.949431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 
00:25:56.042 [2024-07-15 11:52:03.949553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.949577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.949698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.949723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.949830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.949855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.949978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.950090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.950216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.950361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.950517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.950705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 00:25:56.042 [2024-07-15 11:52:03.950891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.042 [2024-07-15 11:52:03.950918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.042 qpair failed and we were unable to recover it. 
00:25:56.042 [2024-07-15 11:52:03.951044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.042 [2024-07-15 11:52:03.951069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420
00:25:56.042 qpair failed and we were unable to recover it.
00:25:56.042 [2024-07-15 11:52:03.951191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.042 [2024-07-15 11:52:03.951216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420
00:25:56.042 qpair failed and we were unable to recover it.
00:25:56.042 [2024-07-15 11:52:03.951326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:56.042 [2024-07-15 11:52:03.951336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.042 [2024-07-15 11:52:03.951358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:56.042 [2024-07-15 11:52:03.951363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420
00:25:56.042 [2024-07-15 11:52:03.951372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:56.042 qpair failed and we were unable to recover it.
00:25:56.042 [2024-07-15 11:52:03.951384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:56.042 [2024-07-15 11:52:03.951395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:56.042 [2024-07-15 11:52:03.951471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.042 [2024-07-15 11:52:03.951495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:56.042 qpair failed and we were unable to recover it.
00:25:56.042 [2024-07-15 11:52:03.951476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:56.042 [2024-07-15 11:52:03.951529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:56.042 [2024-07-15 11:52:03.951581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.042 [2024-07-15 11:52:03.951605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:56.042 qpair failed and we were unable to recover it.
00:25:56.042 [2024-07-15 11:52:03.951579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:56.043 [2024-07-15 11:52:03.951581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:56.043 [2024-07-15 11:52:03.951698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.043 [2024-07-15 11:52:03.951722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:56.043 qpair failed and we were unable to recover it.
00:25:56.043 [2024-07-15 11:52:03.951825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.043 [2024-07-15 11:52:03.951849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420
00:25:56.043 qpair failed and we were unable to recover it.
00:25:56.043 [2024-07-15 11:52:03.951995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.952152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.952297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.952443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.952596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.952765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.952899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.952924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.953050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.953203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.953349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 
00:25:56.043 [2024-07-15 11:52:03.953463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.953588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.953764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.953944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.953970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.954082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.954224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.954384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.954536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.954661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.954791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 
00:25:56.043 [2024-07-15 11:52:03.954927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.954953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.955106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.955254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.955430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.955560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.955715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.955851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.955978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.956100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.956247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 
00:25:56.043 [2024-07-15 11:52:03.956366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.956517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.956643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.956792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.956935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.956962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.957114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.957140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.957341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.957366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.957462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.043 [2024-07-15 11:52:03.957487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.043 qpair failed and we were unable to recover it. 00:25:56.043 [2024-07-15 11:52:03.957591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.957617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.957786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.957824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 
00:25:56.044 [2024-07-15 11:52:03.957956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.957982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.958090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.958115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.958262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.958286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.958406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.958431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.958567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.958606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.958721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.958762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.958896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.958923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.959057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.959173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.959299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 
00:25:56.044 [2024-07-15 11:52:03.959421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.959572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.959732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.959899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.959924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.960078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.960105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.960228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.960255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.960390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.960415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.960524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.960550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.960653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.960678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.960824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.960864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 
00:25:56.044 [2024-07-15 11:52:03.961027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.961160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.961311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.961458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.961587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.961830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.044 [2024-07-15 11:52:03.961956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.044 [2024-07-15 11:52:03.961982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.044 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.962182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.962209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.962313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.962340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.962452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.962479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 
00:25:56.314 [2024-07-15 11:52:03.962606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.962632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.962725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.962756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.962849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.962875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.962989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.963113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.963232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.963386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.963509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.963636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.963770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 
00:25:56.314 [2024-07-15 11:52:03.963916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.963941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.964913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.964940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.965095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.965245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 
00:25:56.314 [2024-07-15 11:52:03.965366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.965516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.965661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.965789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.965936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.965960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.966082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.966202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.966337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.966490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.966650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 
00:25:56.314 [2024-07-15 11:52:03.966773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.966909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.966936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.967065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.967091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.967195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.967222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.967359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.967385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.314 [2024-07-15 11:52:03.967520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-07-15 11:52:03.967546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.314 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.967667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.967692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.967826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.967853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.967947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.967972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.968096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.968122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 
00:25:56.315 [2024-07-15 11:52:03.968212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.968237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.968362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.968388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.968512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.968537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.968690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.968717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.968844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.968883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.969017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.969151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.969270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.969422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.969574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 
00:25:56.315 [2024-07-15 11:52:03.969731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.969913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.969941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.970092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.970117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.970218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.970243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.970367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.970392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.970517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.970543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.970678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.970704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.970870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.970898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.971025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.971049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.971146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.971171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 
00:25:56.315 [2024-07-15 11:52:03.971322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.971347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.971450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.971483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.971649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.971688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.971871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.971899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.972024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.972175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.972322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.972435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.972582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.972765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 
00:25:56.315 [2024-07-15 11:52:03.972919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.972944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.973107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.973265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.973395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.973580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.973723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.973879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.973987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.974135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.974254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 
00:25:56.315 [2024-07-15 11:52:03.974377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.974496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.974671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.974846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.974872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.975005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.975030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.975150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.975175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.975297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.975322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.315 [2024-07-15 11:52:03.975455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.315 [2024-07-15 11:52:03.975480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.315 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.975587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.975613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.975769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.975808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 
00:25:56.316 [2024-07-15 11:52:03.975919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.975946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.976099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.976252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.976427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.976576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.976735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.976879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.976985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.977113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.977264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 
00:25:56.316 [2024-07-15 11:52:03.977422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.977586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.977755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.977906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.977932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.978069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.978094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.978224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.978249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.978345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.978370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.978498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.978523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.978631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.978656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.978797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.978837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 
00:25:56.316 [2024-07-15 11:52:03.978974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.979127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.979253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.979411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.979597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.979765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.979895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.979921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.980044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.980199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.980340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 
00:25:56.316 [2024-07-15 11:52:03.980491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.980661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.980801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.980947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.980974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.981104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.981129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.981280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.981305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.981430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.981455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.981553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.981578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.981694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.981719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.981828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.981853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 
00:25:56.316 [2024-07-15 11:52:03.981983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.982106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.982286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.982423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.982575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.982719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.982868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.982892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.983012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.983037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.983131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.983156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 00:25:56.316 [2024-07-15 11:52:03.983281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-07-15 11:52:03.983306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.316 qpair failed and we were unable to recover it. 
00:25:56.317 [2024-07-15 11:52:03.983455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.983480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.983589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.983618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.983767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.983805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.983918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.983957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.984122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.984163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.984334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.984361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.984512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.984537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.984638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.984663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.984776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.984802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.984930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.984955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 
00:25:56.317 [2024-07-15 11:52:03.985077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.985102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.985252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.985277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.985399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.985424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.985544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.985569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.985697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.985721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.985872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.985911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.986087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.986126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.986286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.986312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.986419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.986445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.986545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.986570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 
00:25:56.317 [2024-07-15 11:52:03.986693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.986718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.986855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.986882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.986994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.987118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.987294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.987448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.987601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.987721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.987866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.987893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 
00:25:56.317 [2024-07-15 11:52:03.988120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.988972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.988997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.989113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.989138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.989285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.989310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.989409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.989435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 
00:25:56.317 [2024-07-15 11:52:03.989603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.989642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.317 qpair failed and we were unable to recover it. 00:25:56.317 [2024-07-15 11:52:03.989758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-07-15 11:52:03.989792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.989925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.989951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.990858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.990883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 
00:25:56.318 [2024-07-15 11:52:03.990981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.991006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.991158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.991183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.991330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.991355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.991483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.991508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.991652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.991691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.991845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.991873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.992002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.992125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.992281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.992406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 
00:25:56.318 [2024-07-15 11:52:03.992575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.992759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.992956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.992983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.993133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.993158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.993256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.993281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.993402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.993427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.993529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.993555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.993671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.993710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.993903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.993943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.994079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.994105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 
00:25:56.318 [2024-07-15 11:52:03.994203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.994228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.994359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.994384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.994503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.994529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.994669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.994708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.994884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.994923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.995038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.995157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.995285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.995461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.995602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 
00:25:56.318 [2024-07-15 11:52:03.995801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.995971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.995997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.996146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.996172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.996277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.996301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.996455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.996480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.996570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.996595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.996743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.996782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.996917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.996943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.997079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.997117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.997254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.997280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 
00:25:56.318 [2024-07-15 11:52:03.997392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.997418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.997521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.997546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.997672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.997698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.318 [2024-07-15 11:52:03.997843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-07-15 11:52:03.997882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.318 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.998038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.998186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.998299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.998445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.998594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.998744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 
00:25:56.319 [2024-07-15 11:52:03.998933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.998971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.999124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.999263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.999419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.999564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.999746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:03.999891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:03.999989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.000107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.000265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 
00:25:56.319 [2024-07-15 11:52:04.000423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.000568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.000717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.000876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.000901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 
00:25:56.319 [2024-07-15 11:52:04.001852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.001877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.001981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.002103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.002250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.002429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.002571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.002708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.002865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.002890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.003014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.003160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 
00:25:56.319 [2024-07-15 11:52:04.003314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.003465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.003589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.003741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.003897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.003921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.004021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.004197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.004318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.004444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.004576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 
00:25:56.319 [2024-07-15 11:52:04.004767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.004938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.004965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.319 [2024-07-15 11:52:04.005090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.319 [2024-07-15 11:52:04.005115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.319 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.005210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.005236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.005368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.005393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.005480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.005505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.005636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.005661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.005790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.005816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.005914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.005945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.006079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.006105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 
00:25:56.320 [2024-07-15 11:52:04.006238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.006263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.006388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.006413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.006529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.006554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.006697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.006748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.006881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.006907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.007032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.007181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.007331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.007446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.007595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 
00:25:56.320 [2024-07-15 11:52:04.007750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.007896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.007922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.008963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.008988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.009091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.009115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 
00:25:56.320 [2024-07-15 11:52:04.009230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.009254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.009359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.009386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.009509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.009547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.009723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.009770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.009908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.009935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a4c000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.010052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.010171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.010291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.010434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.010625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 
00:25:56.320 [2024-07-15 11:52:04.010786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.010936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.010962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.011961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.011986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.012107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.012132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 
00:25:56.320 [2024-07-15 11:52:04.012260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.012285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.012420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.012445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.012575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.012600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.012693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.012718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.320 [2024-07-15 11:52:04.012846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-07-15 11:52:04.012871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.320 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.013019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.013177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.013288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.013434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.013581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 
00:25:56.321 [2024-07-15 11:52:04.013746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.013941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.013967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.014128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.014152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.014280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.014305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.014425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.014449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.014571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.014595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.014719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.014750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.014900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.014923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.015039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.015157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 
00:25:56.321 [2024-07-15 11:52:04.015274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.015423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.015540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.015694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.015893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.015932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.016090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.016122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.016251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.016277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.016395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.016420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.016551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.016576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.016702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.016727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 
00:25:56.321 [2024-07-15 11:52:04.016878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.016916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a54000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.017959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.017984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fea0 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.018140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.018167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.018278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.018304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 
00:25:56.321 [2024-07-15 11:52:04.018453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.018479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.018580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.018605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.018703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.018728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.018890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.018915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.019042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.019198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.019376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.019501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.019655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 00:25:56.321 [2024-07-15 11:52:04.019827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.321 qpair failed and we were unable to recover it. 
00:25:56.321 [2024-07-15 11:52:04.019954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-07-15 11:52:04.019979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.020889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.020914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.021017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.021167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 
00:25:56.322 [2024-07-15 11:52:04.021342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.021489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.021610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.021793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.021944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.021969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.022090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.022115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.022270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.022295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.022423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.022448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.022561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.022587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.022696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.022721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 
00:25:56.322 [2024-07-15 11:52:04.022850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.022875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.023899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.023924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 
00:25:56.322 [2024-07-15 11:52:04.024322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.024893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.024993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.025122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.025271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.025420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.025563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 
00:25:56.322 [2024-07-15 11:52:04.025705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.025838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.025969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.025998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 00:25:56.322 [2024-07-15 11:52:04.026966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.026991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.322 qpair failed and we were unable to recover it. 
00:25:56.322 [2024-07-15 11:52:04.027114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-07-15 11:52:04.027139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.027258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.027283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.027368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.027393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.027486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.027511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.027627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.027652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.027756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.027783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.027939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.027965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.028116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.028293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.028445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 
00:25:56.323 [2024-07-15 11:52:04.028566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.028685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.028805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.028925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.028950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.029103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.029128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.029232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.029257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.029381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.029406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.029527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.029552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.029639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.029665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.029818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.029844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 
00:25:56.323 [2024-07-15 11:52:04.029993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.030164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.030313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.030466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.030584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.030731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.030890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.030916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.031036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.031184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.031307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 
00:25:56.323 [2024-07-15 11:52:04.031431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.031579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.031707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.031856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.031882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.032004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.032179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.032326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.032449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.032628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.032803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 
00:25:56.323 [2024-07-15 11:52:04.032957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.032983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.033967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.033992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.034145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.034170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.034295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.034320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 
00:25:56.323 [2024-07-15 11:52:04.034448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.034473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.034569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-07-15 11:52:04.034594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.323 qpair failed and we were unable to recover it. 00:25:56.323 [2024-07-15 11:52:04.034719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.034748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.034834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.034859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.034957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.034982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.035082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.035226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.035376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.035495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.035640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 
00:25:56.324 [2024-07-15 11:52:04.035795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.035944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.035969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.036093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.036119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.036269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.036294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.036423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.036448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.036546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.036571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.036718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.036750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.036887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.036912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.037062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.037187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 
00:25:56.324 [2024-07-15 11:52:04.037330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.037448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.037634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.037811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.037960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.037985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.038133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.038158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.038308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.038333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.038457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.038482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.038606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.038631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.038721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.038752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 
00:25:56.324 [2024-07-15 11:52:04.038871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.038896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.039873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.039899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.040023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.040141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 
00:25:56.324 [2024-07-15 11:52:04.040316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.040471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.040612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.040757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.040876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.040901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.041000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.041025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.324 [2024-07-15 11:52:04.041175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-07-15 11:52:04.041200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.324 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.041321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.041346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.041466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.041491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.041624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.041649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 
00:25:56.325 [2024-07-15 11:52:04.041767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.041793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.041942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.041967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.042085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.042110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.042273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.042298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.042425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.042450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.042566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.042591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.042710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.042736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.042866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.042891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.043032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.043141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 
00:25:56.325 [2024-07-15 11:52:04.043293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.043443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.043588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.043749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.043901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.043925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.044020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.044045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.044193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.044218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.044343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.044368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.044492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.044517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.044669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.044694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 
00:25:56.325 [2024-07-15 11:52:04.044847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.044873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.045893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.045918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.046023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.046172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 
00:25:56.325 [2024-07-15 11:52:04.046324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.046471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.046599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.046750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.046899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.046924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.047057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.047082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.047235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.047261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.047378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.047404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.047527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.047552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.047685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.047710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 
00:25:56.325 [2024-07-15 11:52:04.047840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.047866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.048014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.048039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.048181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.048206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.048305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.048330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.048460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.048486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.048692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.048717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.048894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.048919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.049125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-07-15 11:52:04.049150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.325 qpair failed and we were unable to recover it. 00:25:56.325 [2024-07-15 11:52:04.049299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.049325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.049495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.049520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 
00:25:56.326 [2024-07-15 11:52:04.049660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.049685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.049846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.049873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.050036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.050065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.050218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.050244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.050462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.050491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.050580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.050605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.050761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.050787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.050933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.050959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.051089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.051114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.051243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.051268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 
00:25:56.326 [2024-07-15 11:52:04.051386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.051411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.051576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.051601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.051827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.051861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.051966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.051991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.052159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.052184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.052352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.052377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.052537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.052563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.052718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.052748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.052929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.052954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.053107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.053131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 
00:25:56.326 [2024-07-15 11:52:04.053335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.053361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.053540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.053565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.053722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.053754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.053868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.053894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.053999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.054150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.054303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.054449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.054633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.054777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 
00:25:56.326 [2024-07-15 11:52:04.054966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.054992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.055098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.055123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.055251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.055276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.055463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.055488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.055683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.055708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.055866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.055892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.056049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.056074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.056254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.056284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.056440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.056465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.056576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.056601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 
00:25:56.326 [2024-07-15 11:52:04.056757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.056782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.056891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.056916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.057098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.057274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.057410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.057593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.057724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.057852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.057985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.058010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.058142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.058167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 
00:25:56.326 [2024-07-15 11:52:04.058255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-07-15 11:52:04.058280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.326 qpair failed and we were unable to recover it. 00:25:56.326 [2024-07-15 11:52:04.058411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.058436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.058624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.058660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.058792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.058818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.058945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.058970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.059099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.059124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.059275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.059301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.059531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.059556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.059655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.059681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.059847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.059873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 
00:25:56.327 [2024-07-15 11:52:04.060040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.060065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.060238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.060263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.060470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.060495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.060670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.060695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.060829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.060855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.061003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.061151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.061302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.061454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.061669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 
00:25:56.327 [2024-07-15 11:52:04.061800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.061957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.061982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.062117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.062143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.062287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.062312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.062405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.062430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.062532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.062557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.062655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.062680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.062809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.062835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.063055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.063092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.063230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.063256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 
00:25:56.327 [2024-07-15 11:52:04.063429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.063454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.063612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.063638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.063839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.063878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.063993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.064019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.064152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.064178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.064334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.064359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.064538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.064563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.064764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.064790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.064968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.064993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.065172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.065197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 
00:25:56.327 [2024-07-15 11:52:04.065316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.065341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.065437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.065462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.065575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.065601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.065761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.065787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.065989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.066021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.066153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-07-15 11:52:04.066178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.327 qpair failed and we were unable to recover it. 00:25:56.327 [2024-07-15 11:52:04.066313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.066338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.066518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.066544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.066700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.066726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.066862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.066888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 
00:25:56.328 [2024-07-15 11:52:04.067012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.067037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.067144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.067169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.067299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.067324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.067482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.067507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.067743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.067777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.067886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.067910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.068064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.068090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.068243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.068268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.068427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.068452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.068574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.068600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 
00:25:56.328 [2024-07-15 11:52:04.068711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.068736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.068843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.068868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.069926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.069951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.070105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.070130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 
00:25:56.328 [2024-07-15 11:52:04.070274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.070299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.070423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.070448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.328 [2024-07-15 11:52:04.070572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.070606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:56.328 [2024-07-15 11:52:04.070734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.070768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.328 [2024-07-15 11:52:04.070925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.070951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.071057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.328 [2024-07-15 11:52:04.071089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.328 [2024-07-15 11:52:04.071224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.071250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.071369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.071394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 
00:25:56.328 [2024-07-15 11:52:04.071527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.071552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.071694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.071719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.071852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.071879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.071977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.072153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.072295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.072430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.072607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.072760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.072915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.072940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 
00:25:56.328 [2024-07-15 11:52:04.073147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.073321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.073443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.073596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.073713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.073844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.073970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.073995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.074114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.074139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.074226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.074251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 00:25:56.328 [2024-07-15 11:52:04.074377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-07-15 11:52:04.074410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.328 qpair failed and we were unable to recover it. 
00:25:56.329 [2024-07-15 11:52:04.074538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.074563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.074667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.074693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.074818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.074844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.074972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.074998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.075120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.075145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.075311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.075335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.075485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.075511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.075659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.075684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.075781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.075807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.075936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.075962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 
00:25:56.329 [2024-07-15 11:52:04.076088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.076207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.076329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.076513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.076665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.076782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.076963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.076989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.077089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.077231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.077350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 
00:25:56.329 [2024-07-15 11:52:04.077496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.077652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.077776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.077925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.077950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.078101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.078127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.078247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.078272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.078388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.078413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.078556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.078582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.078761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.078787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.078904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.078929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 
00:25:56.329 [2024-07-15 11:52:04.079056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.079203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.079333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.079451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.079623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.079757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.079908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.079934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.080026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.080143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.080276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 
00:25:56.329 [2024-07-15 11:52:04.080423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.080585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.080817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.080971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.080996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.081101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.081126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.081259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.081284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.081381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.081406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.081586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.081612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.081702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.081727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 00:25:56.329 [2024-07-15 11:52:04.081859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.081885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.329 qpair failed and we were unable to recover it. 
00:25:56.329 [2024-07-15 11:52:04.081988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-07-15 11:52:04.082014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.082240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.082265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.082398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.082423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.082582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.082608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.082708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.082734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.082846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.082871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.082997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.083022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.083119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.083144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.083339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.083364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.083566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.083592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 
00:25:56.330 [2024-07-15 11:52:04.083749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.083775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.083871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.083897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.084893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.084919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.085025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 
00:25:56.330 [2024-07-15 11:52:04.085175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.085303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.085456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.085635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.085789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.085918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.085944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.086090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.086261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.086404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.086545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 
00:25:56.330 [2024-07-15 11:52:04.086713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.086851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.086972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.086997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.087135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.087161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.087258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.087284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.087430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.087455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.087589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.087615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.087716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.087748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.087872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.087897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.088002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.088027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 
00:25:56.330 [2024-07-15 11:52:04.088211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.088236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.330 [2024-07-15 11:52:04.088389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.088415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:56.330 [2024-07-15 11:52:04.088587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.088613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.330 [2024-07-15 11:52:04.088744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.088770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.330 [2024-07-15 11:52:04.088877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.088902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.089001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.089176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.089302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 
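errno 111 here is ECONNREFUSED: the host keeps retrying the connection to 10.0.0.2 on port 4420 (the standard NVMe/TCP listener port) while the target side is not accepting connections, which is the disconnect scenario the target_disconnect_tc2 case exercises, so every nvme_tcp_qpair_connect_sock attempt fails and the qpair cannot recover. The interleaved xtrace lines above show host/target_disconnect.sh installing its cleanup trap and then creating the Malloc0 bdev over RPC. A minimal sketch of the equivalent standalone RPC call, assuming the default SPDK RPC socket path (the test itself goes through its rpc_cmd helper):

    # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
    # (the same bdev_malloc_create RPC the test script issues at this point).
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0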
00:25:56.330 [2024-07-15 11:52:04.089528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.089677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.089803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.330 qpair failed and we were unable to recover it. 00:25:56.330 [2024-07-15 11:52:04.089928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.330 [2024-07-15 11:52:04.089953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.090087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.090113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.090251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.090277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.090414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.090440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.090596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.090621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.090711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.090736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.090868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.090893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 
00:25:56.331 [2024-07-15 11:52:04.090982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.091098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.091249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.091386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.091565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.091770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.091896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.091921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.092013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.092038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.092158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.092187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.092346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.092375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 
00:25:56.331 [2024-07-15 11:52:04.092518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.092544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.092735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.092766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.092866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.092891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.092985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.093125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.093268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.093449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.093600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.093722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.093871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.093897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 
00:25:56.331 [2024-07-15 11:52:04.093987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.094925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.094950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.095072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.095097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.095255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.095279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 
00:25:56.331 [2024-07-15 11:52:04.095460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.095486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.095583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.095608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.095743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.095768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.095874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.095899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.096005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.096030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.096156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.096182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.096339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.096364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.096481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.096506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.096676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.096701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.096835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.096861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 
00:25:56.331 [2024-07-15 11:52:04.096985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.097955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.097980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.098172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.098197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 00:25:56.331 [2024-07-15 11:52:04.098384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-07-15 11:52:04.098413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.331 qpair failed and we were unable to recover it. 
00:25:56.331 [2024-07-15 11:52:04.098561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.098586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.098722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.098752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.098882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.098907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.099911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.099936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 
00:25:56.332 [2024-07-15 11:52:04.100066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.100185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.100371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.100544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.100720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.100848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.100968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.100993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.101087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.101113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.101268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.101293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.101443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.101468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 
00:25:56.332 [2024-07-15 11:52:04.101574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.101600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.101715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.101775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.101910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.101936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.102898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.102923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 
00:25:56.332 [2024-07-15 11:52:04.103029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.103054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.103203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.103229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.103417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.103442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.103598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.103623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.103714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.103745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.103844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.103869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.103986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.104011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.104188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.104217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.104334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.104360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.104496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.104530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 
00:25:56.332 [2024-07-15 11:52:04.104690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.104715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.104812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.104838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.104991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.105195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.105376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.105496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.105609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.105765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.105894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.105919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.106018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 
00:25:56.332 [2024-07-15 11:52:04.106166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.106317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.106446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.106631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.106775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.106888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.106914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.107075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.107232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.107379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.107492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 
00:25:56.332 [2024-07-15 11:52:04.107641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.107763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.107917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.107943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.332 [2024-07-15 11:52:04.108048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-07-15 11:52:04.108073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.332 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.108203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.108228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.108357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.108382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.108511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.108537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.108648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.108672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.108779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.108813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.108914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.108939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 
00:25:56.333 [2024-07-15 11:52:04.109027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.109052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.109206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.109231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.109387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.109412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.109617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.109642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.109779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.109805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.109907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.109932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.110082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.110108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.110290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.110315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.110496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.110521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.110722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.110752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 
00:25:56.333 [2024-07-15 11:52:04.110885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.110911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.111898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.111923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.112089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.112114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.112332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 Malloc0 00:25:56.333 [2024-07-15 11:52:04.112358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 
00:25:56.333 [2024-07-15 11:52:04.112543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.112568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.333 [2024-07-15 11:52:04.112800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.112826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:56.333 [2024-07-15 11:52:04.112977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.113003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.333 [2024-07-15 11:52:04.113102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.113127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.333 [2024-07-15 11:52:04.113356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.113381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.113511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.113537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.113663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.113688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.113834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.113859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 
00:25:56.333 [2024-07-15 11:52:04.114009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.114035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.114151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.114176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.114299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.114324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.114462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.114488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.114735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.114766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.114895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.114920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.115054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.115079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.115291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.115317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.115451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.115476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.115607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.115641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 
00:25:56.333 [2024-07-15 11:52:04.115809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.115835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.115933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.115958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.116026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.333 [2024-07-15 11:52:04.116112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.116137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.116277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.116302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.116450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.116475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.116633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.116661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.116794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.116821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.116947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.116972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.117167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.117192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 
00:25:56.333 [2024-07-15 11:52:04.117334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.117368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.117506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.117531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.117633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-07-15 11:52:04.117658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.333 qpair failed and we were unable to recover it. 00:25:56.333 [2024-07-15 11:52:04.117820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.117847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.117972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.117997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.118137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.118173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.118298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.118323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.118420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.118445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.118582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.118607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.118784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.118810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 
00:25:56.334 [2024-07-15 11:52:04.118981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.119122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.119281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.119455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.119585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.119823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.119944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.119969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.120079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.120105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.120258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.120284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.120519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.120553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 
00:25:56.334 [2024-07-15 11:52:04.120712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.120743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.120910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.120935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.121087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.121112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.121230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.121255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.121404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.121429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.121536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.121561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.121688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.121713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.121820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.121850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.122041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.122066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.122204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.122230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 
00:25:56.334 [2024-07-15 11:52:04.122344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.122369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.122514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.122539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.122638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.122663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.122827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.122853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.123002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.123145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.123312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.123492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.123628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.123790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 
00:25:56.334 [2024-07-15 11:52:04.123914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.123939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.124098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.124123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.334 [2024-07-15 11:52:04.124289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.124315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.334 [2024-07-15 11:52:04.124474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.124499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.334 [2024-07-15 11:52:04.124615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.334 [2024-07-15 11:52:04.124641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.124747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.124773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.124899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.124924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.125058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.125084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 
00:25:56.334 [2024-07-15 11:52:04.125189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.125213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.125439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.125469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.125557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.125583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.125715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.125744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.125878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.125903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.126036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.126061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.126222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.126247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.126420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.126445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.126647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.126672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.126807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.126832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 
00:25:56.334 [2024-07-15 11:52:04.127049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.127075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.127231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.127256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.127437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.127462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.127623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.127648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.127845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.127871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.128012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.128037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.128231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-07-15 11:52:04.128257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.334 qpair failed and we were unable to recover it. 00:25:56.334 [2024-07-15 11:52:04.128394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.128419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.128583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.128608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.128735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.128768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 [2024-07-15 11:52:04.128884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.128909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.129101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.129127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.129261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.129286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.129396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.129421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.129551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.129583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.129723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.129752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.129896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.129921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.130082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.130108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.130295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.130321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.130507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.130532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 [2024-07-15 11:52:04.130689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.130714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.130896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.130922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.131056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.131081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.131233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.131258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.131379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.131404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.131542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.131567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.131767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.131793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.131934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.131959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.132089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.132114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.335 [2024-07-15 11:52:04.132320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.132350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.335 [2024-07-15 11:52:04.132503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.132528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.335 [2024-07-15 11:52:04.132722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.132767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.132906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.132935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.133041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.133192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.133305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.133426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 [2024-07-15 11:52:04.133577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.133756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.133908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.133934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.134084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.134109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.134217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.134243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.134356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.134381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.134517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.134542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.134690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.134715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.134839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.134865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.135000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.135025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 [2024-07-15 11:52:04.135152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.135177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.135298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.135324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.135432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.135457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.135615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.135640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.135819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.135845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.136001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.136026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.136205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.136231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.136347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.136372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.136481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.136505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.136677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.136703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 [2024-07-15 11:52:04.136841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.136866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.137113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.137138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.137269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.137295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.137417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.137442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.137550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.137575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.137735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.137765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.137883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.137908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.138076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.138100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.138228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.138253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.138376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.138401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 
00:25:56.335 [2024-07-15 11:52:04.138551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.138576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.138761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.138786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.335 qpair failed and we were unable to recover it. 00:25:56.335 [2024-07-15 11:52:04.138909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-07-15 11:52:04.138934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.139067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.139092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.139200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.139226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.139356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.139380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.139484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.139509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.139732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.139764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.139897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.139921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.140068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.140093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 
00:25:56.336 [2024-07-15 11:52:04.140254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.140279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.336 [2024-07-15 11:52:04.140494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.140519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.336 [2024-07-15 11:52:04.140652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 [2024-07-15 11:52:04.140676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.140815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.140840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.140970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.140995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.141107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.141132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.141292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.141317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.141430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.141462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 
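Similarly, line 25 of host/target_disconnect.sh (interleaved with the connect retries above) adds the TCP listener for the subsystem; until this call completes, every connect() to 10.0.0.2:4420 is refused. A sketch of the equivalent direct RPC, under the same assumptions as above:

  # start the NVMe/TCP listener for the subsystem on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420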
00:25:56.336 [2024-07-15 11:52:04.141617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.141641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.141735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.141765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.141894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.141919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.142076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.142100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.142213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.142242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.142338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.142368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.142526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.142551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.142751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.142776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.142888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.142913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.143004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 
00:25:56.336 [2024-07-15 11:52:04.143159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.143284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.143494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.143673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.143807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.143968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.143993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.144121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-07-15 11:52:04.144149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3a44000b90 with addr=10.0.0.2, port=4420 00:25:56.336 qpair failed and we were unable to recover it. 
00:25:56.336 [2024-07-15 11:52:04.144287] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.336 [2024-07-15 11:52:04.146688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.146859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.146888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.146904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.146916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.146950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.336 11:52:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3132742 00:25:56.336 [2024-07-15 11:52:04.156678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.156773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.156799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.156813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.156825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.156855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 
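From the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice onward, the failure mode in the rest of this log changes: the TCP connection itself now succeeds, but the fabrics CONNECT for the I/O queue pair is rejected by the target ("Unknown controller ID 0x1", i.e. the target has no controller with the cntlid the host presents), and the host reports sct 1, sc 130 (0x82, a fabrics CONNECT-specific rejection) followed by CQ transport error -6 (ENXIO) on qpair id 4. This is the reconnect behaviour nvmf_target_disconnect_tc2 is exercising while it waits on the background job it launched earlier (wait 3132742); the blocks that follow repeat the same pattern for each retry. The discovery listener added at line 26 of host/target_disconnect.sh corresponds roughly to (same rpc.py assumption as above):

  # expose the discovery subsystem on the same TCP address/port
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420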
00:25:56.336 [2024-07-15 11:52:04.166652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.166778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.166805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.166820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.166832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.166862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.176590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.176711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.176750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.176768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.176780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.176810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.186645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.186765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.186792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.186806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.186818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.186848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 
00:25:56.336 [2024-07-15 11:52:04.196658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.196765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.196791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.196806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.196818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.196848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.206705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.206808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.206839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.206855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.336 [2024-07-15 11:52:04.206867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.336 [2024-07-15 11:52:04.206896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.336 qpair failed and we were unable to recover it. 00:25:56.336 [2024-07-15 11:52:04.216779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.336 [2024-07-15 11:52:04.216879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.336 [2024-07-15 11:52:04.216906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.336 [2024-07-15 11:52:04.216921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.216933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.216962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 
00:25:56.337 [2024-07-15 11:52:04.226675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.226775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.226801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.226816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.226828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.226858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 00:25:56.337 [2024-07-15 11:52:04.236755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.236850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.236877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.236892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.236904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.236933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 00:25:56.337 [2024-07-15 11:52:04.246782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.246908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.246935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.246949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.246961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.246996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 
00:25:56.337 [2024-07-15 11:52:04.256897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.257000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.257027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.257042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.257054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.257083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 00:25:56.337 [2024-07-15 11:52:04.266839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.266962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.266988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.267001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.267013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.267043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 00:25:56.337 [2024-07-15 11:52:04.276875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.276971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.276998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.277012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.277024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.277054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 
00:25:56.337 [2024-07-15 11:52:04.286884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.337 [2024-07-15 11:52:04.286977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.337 [2024-07-15 11:52:04.287003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.337 [2024-07-15 11:52:04.287018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.337 [2024-07-15 11:52:04.287031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.337 [2024-07-15 11:52:04.287060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.337 qpair failed and we were unable to recover it. 00:25:56.597 [2024-07-15 11:52:04.296886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.597 [2024-07-15 11:52:04.296988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.597 [2024-07-15 11:52:04.297019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.597 [2024-07-15 11:52:04.297035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.597 [2024-07-15 11:52:04.297047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.597 [2024-07-15 11:52:04.297076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.597 qpair failed and we were unable to recover it. 00:25:56.597 [2024-07-15 11:52:04.306940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.597 [2024-07-15 11:52:04.307043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.597 [2024-07-15 11:52:04.307069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.597 [2024-07-15 11:52:04.307083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.597 [2024-07-15 11:52:04.307095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.597 [2024-07-15 11:52:04.307124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.597 qpair failed and we were unable to recover it. 
00:25:56.597 [2024-07-15 11:52:04.316961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.597 [2024-07-15 11:52:04.317060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.597 [2024-07-15 11:52:04.317086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.597 [2024-07-15 11:52:04.317100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.597 [2024-07-15 11:52:04.317112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.597 [2024-07-15 11:52:04.317141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.597 qpair failed and we were unable to recover it. 00:25:56.597 [2024-07-15 11:52:04.327035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.597 [2024-07-15 11:52:04.327129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.327155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.327170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.327183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.327212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.337025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.337165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.337191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.337206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.337218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.337253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 
00:25:56.598 [2024-07-15 11:52:04.347115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.347229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.347255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.347269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.347282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.347311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.357051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.357186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.357212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.357226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.357238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.357267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.367122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.367233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.367259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.367273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.367285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.367315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 
00:25:56.598 [2024-07-15 11:52:04.377190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.377307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.377333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.377347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.377359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.377388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.387179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.387287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.387318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.387334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.387346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.387381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.397228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.397337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.397363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.397378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.397390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.397420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 
00:25:56.598 [2024-07-15 11:52:04.407265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.407373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.407401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.407416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.407428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.407458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.417263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.417411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.417438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.417453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.417465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.417494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.427391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.427513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.427539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.427554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.427572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.427602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 
00:25:56.598 [2024-07-15 11:52:04.437394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.437503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.437528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.437543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.437555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.437584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.447355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.447507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.447533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.447548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.447560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.447588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.457382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.457498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.457524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.457538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.457551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.457579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 
00:25:56.598 [2024-07-15 11:52:04.467440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.467551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.467578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.467593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.467605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.467635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.598 qpair failed and we were unable to recover it. 00:25:56.598 [2024-07-15 11:52:04.477471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.598 [2024-07-15 11:52:04.477581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.598 [2024-07-15 11:52:04.477607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.598 [2024-07-15 11:52:04.477622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.598 [2024-07-15 11:52:04.477634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.598 [2024-07-15 11:52:04.477663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.487444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.487555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.487582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.487596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.487608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.487647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 
00:25:56.599 [2024-07-15 11:52:04.497520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.497633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.497659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.497673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.497686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.497719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.507502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.507632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.507658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.507673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.507685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.507714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.517530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.517641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.517668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.517688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.517700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.517729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 
00:25:56.599 [2024-07-15 11:52:04.527547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.527689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.527715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.527729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.527752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.527783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.537626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.537746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.537772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.537786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.537798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.537828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.547596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.547710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.547745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.547762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.547775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.547804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 
00:25:56.599 [2024-07-15 11:52:04.557629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.557752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.557778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.557793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.557805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.557839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.567657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.567771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.567798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.567812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.567824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.567858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 00:25:56.599 [2024-07-15 11:52:04.577712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.599 [2024-07-15 11:52:04.577837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.599 [2024-07-15 11:52:04.577863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.599 [2024-07-15 11:52:04.577877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.599 [2024-07-15 11:52:04.577889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.599 [2024-07-15 11:52:04.577920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.599 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-07-15 11:52:04.587755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.859 [2024-07-15 11:52:04.587861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.859 [2024-07-15 11:52:04.587888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.859 [2024-07-15 11:52:04.587903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.859 [2024-07-15 11:52:04.587916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.587946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.597758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.597856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.597882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.597898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.597911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.597940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.607783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.607876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.607902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.607921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.607934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.607963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-07-15 11:52:04.617841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.617942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.617967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.617982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.617994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.618023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.627836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.627933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.627958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.627973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.627985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.628014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.637930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.638029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.638055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.638069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.638082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.638111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-07-15 11:52:04.647972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.648082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.648117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.648132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.648144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.648173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.657960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.658062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.658088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.658103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.658115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.658143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.667982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.668101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.668126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.668141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.668153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.668181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-07-15 11:52:04.677999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.678104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.678130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.678144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.678156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.678185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.687998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.688114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.688139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.688153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.688166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.688194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-07-15 11:52:04.698114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.698231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.698262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.698277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.698290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.698319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-07-15 11:52:04.708079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.860 [2024-07-15 11:52:04.708208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.860 [2024-07-15 11:52:04.708233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.860 [2024-07-15 11:52:04.708246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.860 [2024-07-15 11:52:04.708259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.860 [2024-07-15 11:52:04.708288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.718181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.718312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.718338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.718352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.718365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.718393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.728166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.728273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.728299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.728314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.728326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.728354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 
00:25:56.861 [2024-07-15 11:52:04.738164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.738279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.738304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.738318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.738330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.738366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.748204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.748343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.748369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.748384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.748396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.748425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.758230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.758369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.758395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.758410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.758422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.758452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 
00:25:56.861 [2024-07-15 11:52:04.768244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.768404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.768430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.768445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.768457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.768486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.778293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.778406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.778431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.778446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.778458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.778486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.788300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.788410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.788441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.788457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.788469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.788498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 
00:25:56.861 [2024-07-15 11:52:04.798333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.798440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.798466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.798480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.798493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.798522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.808336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.808453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.808479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.808493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.808505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.808534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.818403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.818552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.818577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.818592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.818604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.818632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 
00:25:56.861 [2024-07-15 11:52:04.828420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.861 [2024-07-15 11:52:04.828535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.861 [2024-07-15 11:52:04.828561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.861 [2024-07-15 11:52:04.828575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.861 [2024-07-15 11:52:04.828593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.861 [2024-07-15 11:52:04.828623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-07-15 11:52:04.838394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.862 [2024-07-15 11:52:04.838505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.862 [2024-07-15 11:52:04.838532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.862 [2024-07-15 11:52:04.838547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.862 [2024-07-15 11:52:04.838559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:56.862 [2024-07-15 11:52:04.838589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.862 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.848439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.848550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.848576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.848591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.848603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.848634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 
00:25:57.122 [2024-07-15 11:52:04.858504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.858616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.858641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.858656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.858668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.858697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.868501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.868618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.868643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.868657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.868670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.868700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.878509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.878624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.878652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.878669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.878682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.878714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 
00:25:57.122 [2024-07-15 11:52:04.888612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.888718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.888751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.888771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.888784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.888813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.898632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.898757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.898792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.898807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.898820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.898848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.908623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.908734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.908766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.908781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.908793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.908822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 
00:25:57.122 [2024-07-15 11:52:04.918666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.918791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.918817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.918838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.918851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.918880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.928696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.928871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.928897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.928911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.928924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.928953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.938703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.938877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.938903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.938918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.938930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.938959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 
00:25:57.122 [2024-07-15 11:52:04.948700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.948847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.948875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.948890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.948905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.948935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.959021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.959158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.959182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.122 [2024-07-15 11:52:04.959196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.122 [2024-07-15 11:52:04.959208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.122 [2024-07-15 11:52:04.959237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.122 qpair failed and we were unable to recover it. 00:25:57.122 [2024-07-15 11:52:04.968889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.122 [2024-07-15 11:52:04.969010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.122 [2024-07-15 11:52:04.969035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:04.969050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:04.969062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:04.969091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 
00:25:57.123 [2024-07-15 11:52:04.978941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:04.979050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:04.979076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:04.979090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:04.979103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:04.979132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:04.988923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:04.989050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:04.989076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:04.989090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:04.989103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:04.989131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:04.998864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:04.998957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:04.998982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:04.998996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:04.999008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:04.999037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 
00:25:57.123 [2024-07-15 11:52:05.008924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.009021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.009047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.009067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.009080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.009109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:05.018954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.019052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.019077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.019092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.019104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.019133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:05.029011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.029134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.029159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.029173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.029185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.029214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 
00:25:57.123 [2024-07-15 11:52:05.039028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.039121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.039147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.039162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.039174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.039203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:05.049058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.049166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.049192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.049206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.049218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.049247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:05.059122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.059235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.059260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.059275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.059287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.059316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 
00:25:57.123 [2024-07-15 11:52:05.069088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.069202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.069229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.069243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.069255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.069283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:05.079170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.079294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.079320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.079335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.079347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.079376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.123 [2024-07-15 11:52:05.089141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.089253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.089279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.089294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.089306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.089335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 
00:25:57.123 [2024-07-15 11:52:05.099239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.123 [2024-07-15 11:52:05.099356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.123 [2024-07-15 11:52:05.099386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.123 [2024-07-15 11:52:05.099402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.123 [2024-07-15 11:52:05.099414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.123 [2024-07-15 11:52:05.099444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.123 qpair failed and we were unable to recover it. 00:25:57.381 [2024-07-15 11:52:05.109213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.381 [2024-07-15 11:52:05.109315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.381 [2024-07-15 11:52:05.109342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.381 [2024-07-15 11:52:05.109357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.109370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.109399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.119246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.119353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.119379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.119394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.119406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.119435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 
00:25:57.382 [2024-07-15 11:52:05.129212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.129319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.129345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.129360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.129372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.129402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.139321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.139453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.139479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.139494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.139507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.139542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.149319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.149477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.149504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.149519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.149531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.149560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 
00:25:57.382 [2024-07-15 11:52:05.159412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.159524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.159549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.159564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.159577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.159606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.169367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.169521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.169547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.169561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.169573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.169603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.179392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.179513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.179538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.179553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.179565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.179594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 
00:25:57.382 [2024-07-15 11:52:05.189415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.189532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.189565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.189581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.189593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.189622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.199480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.199588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.199614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.199628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.199641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.199670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 00:25:57.382 [2024-07-15 11:52:05.209506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.209643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.209669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.209683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.382 [2024-07-15 11:52:05.209696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.382 [2024-07-15 11:52:05.209725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.382 qpair failed and we were unable to recover it. 
00:25:57.382 [2024-07-15 11:52:05.219519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.382 [2024-07-15 11:52:05.219633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.382 [2024-07-15 11:52:05.219659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.382 [2024-07-15 11:52:05.219674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.219686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.219715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.229551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.229660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.229686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.229701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.229719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.229755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.239589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.239718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.239750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.239767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.239779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.239808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 
00:25:57.383 [2024-07-15 11:52:05.249546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.249662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.249690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.249705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.249717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.249754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.259708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.259884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.259910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.259925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.259937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.259966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.269701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.269831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.269857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.269872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.269884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.269913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 
00:25:57.383 [2024-07-15 11:52:05.279649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.279790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.279816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.279831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.279843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.279873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.289680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.289819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.289847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.289862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.289875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.289904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.299757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.299863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.299889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.299903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.299915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.299946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 
00:25:57.383 [2024-07-15 11:52:05.309777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.309923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.309949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.309963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.309975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.310004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.319799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.383 [2024-07-15 11:52:05.319897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.383 [2024-07-15 11:52:05.319922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.383 [2024-07-15 11:52:05.319937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.383 [2024-07-15 11:52:05.319955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.383 [2024-07-15 11:52:05.319985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.383 qpair failed and we were unable to recover it. 00:25:57.383 [2024-07-15 11:52:05.329824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.384 [2024-07-15 11:52:05.329931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.384 [2024-07-15 11:52:05.329957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.384 [2024-07-15 11:52:05.329971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.384 [2024-07-15 11:52:05.329983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.384 [2024-07-15 11:52:05.330012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.384 qpair failed and we were unable to recover it. 
00:25:57.384 [2024-07-15 11:52:05.339834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.384 [2024-07-15 11:52:05.339947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.384 [2024-07-15 11:52:05.339973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.384 [2024-07-15 11:52:05.339988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.384 [2024-07-15 11:52:05.340001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.384 [2024-07-15 11:52:05.340029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.384 qpair failed and we were unable to recover it. 00:25:57.384 [2024-07-15 11:52:05.349842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.384 [2024-07-15 11:52:05.349942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.384 [2024-07-15 11:52:05.349968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.384 [2024-07-15 11:52:05.349982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.384 [2024-07-15 11:52:05.349995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.384 [2024-07-15 11:52:05.350025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.384 qpair failed and we were unable to recover it. 00:25:57.384 [2024-07-15 11:52:05.359885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.384 [2024-07-15 11:52:05.359978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.384 [2024-07-15 11:52:05.360005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.384 [2024-07-15 11:52:05.360019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.384 [2024-07-15 11:52:05.360031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.384 [2024-07-15 11:52:05.360060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.384 qpair failed and we were unable to recover it. 
00:25:57.642 [2024-07-15 11:52:05.369954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.370051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.370078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.370092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.370105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.370134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.380067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.380194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.380220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.380235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.380247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.380277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.390202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.390310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.390337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.390352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.390364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.390393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 
00:25:57.642 [2024-07-15 11:52:05.399996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.400100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.400127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.400141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.400154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.400182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.410014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.410137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.410162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.410185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.410198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.410227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.420072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.420190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.420216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.420231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.420244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.420273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 
00:25:57.642 [2024-07-15 11:52:05.430089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.430205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.430231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.430245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.430258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.430287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.440168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.440278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.440304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.440320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.440334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.440364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.450158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.450273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.450299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.450314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.450326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.450355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 
00:25:57.642 [2024-07-15 11:52:05.460200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.460318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.460344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.460358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.460371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.460400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.470269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.470415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.470441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.470455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.470467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.470496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.480231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.480346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.480371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.480386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.480398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.480427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 
00:25:57.642 [2024-07-15 11:52:05.490276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.490392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.490419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.490433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.490446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.490475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.500355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.642 [2024-07-15 11:52:05.500467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.642 [2024-07-15 11:52:05.500499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.642 [2024-07-15 11:52:05.500514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.642 [2024-07-15 11:52:05.500527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.642 [2024-07-15 11:52:05.500556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.642 qpair failed and we were unable to recover it. 00:25:57.642 [2024-07-15 11:52:05.510422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.510530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.510556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.510570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.510583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.510612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 
00:25:57.643 [2024-07-15 11:52:05.520398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.520541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.520566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.520580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.520593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.520621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.530462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.530572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.530598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.530612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.530624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.530654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.540463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.540577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.540602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.540617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.540629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.540664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 
00:25:57.643 [2024-07-15 11:52:05.550509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.550627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.550653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.550669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.550681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.550710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.560473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.560576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.560602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.560616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.560629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.560658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.570519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.570628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.570654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.570669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.570681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.570710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 
00:25:57.643 [2024-07-15 11:52:05.580582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.580693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.580719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.580733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.580755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.580785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.590547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.590651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.590682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.590697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.590709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.590751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.600605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.600744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.600770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.600784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.600796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.600825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 
00:25:57.643 [2024-07-15 11:52:05.610605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.610710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.610736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.610761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.610774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.610803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.643 [2024-07-15 11:52:05.620693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.643 [2024-07-15 11:52:05.620828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.643 [2024-07-15 11:52:05.620854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.643 [2024-07-15 11:52:05.620869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.643 [2024-07-15 11:52:05.620881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.643 [2024-07-15 11:52:05.620910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.643 qpair failed and we were unable to recover it. 00:25:57.901 [2024-07-15 11:52:05.630758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.901 [2024-07-15 11:52:05.630860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.901 [2024-07-15 11:52:05.630885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.901 [2024-07-15 11:52:05.630899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.901 [2024-07-15 11:52:05.630911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.901 [2024-07-15 11:52:05.630946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-07-15 11:52:05.640686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.901 [2024-07-15 11:52:05.640822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.901 [2024-07-15 11:52:05.640848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.901 [2024-07-15 11:52:05.640862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.901 [2024-07-15 11:52:05.640874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.901 [2024-07-15 11:52:05.640904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-07-15 11:52:05.650790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.901 [2024-07-15 11:52:05.650884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.901 [2024-07-15 11:52:05.650909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.901 [2024-07-15 11:52:05.650923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.901 [2024-07-15 11:52:05.650935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.901 [2024-07-15 11:52:05.650964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-07-15 11:52:05.660795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.901 [2024-07-15 11:52:05.660897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.901 [2024-07-15 11:52:05.660923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.901 [2024-07-15 11:52:05.660937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.901 [2024-07-15 11:52:05.660949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.901 [2024-07-15 11:52:05.660978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-07-15 11:52:05.670800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.901 [2024-07-15 11:52:05.670895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.901 [2024-07-15 11:52:05.670920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.901 [2024-07-15 11:52:05.670935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.901 [2024-07-15 11:52:05.670947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.901 [2024-07-15 11:52:05.670976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-07-15 11:52:05.680821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.901 [2024-07-15 11:52:05.680922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.901 [2024-07-15 11:52:05.680948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.901 [2024-07-15 11:52:05.680963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.901 [2024-07-15 11:52:05.680975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.901 [2024-07-15 11:52:05.681004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-07-15 11:52:05.690875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.690976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.691001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.691014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.691027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.691056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-07-15 11:52:05.700948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.701056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.701082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.701096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.701108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.701138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.710937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.711028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.711053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.711067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.711079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.711108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.721013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.721137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.721162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.721176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.721199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.721229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-07-15 11:52:05.731011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.731133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.731157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.731171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.731183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.731213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.741066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.741187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.741211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.741225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.741237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.741266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.751136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.751289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.751315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.751330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.751342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.751381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-07-15 11:52:05.761071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.761207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.761232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.761247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.761259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.761287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.771123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.771255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.771280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.771295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.771308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.771337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.781178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.781287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.781313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.781328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.781340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.781369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-07-15 11:52:05.791197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.902 [2024-07-15 11:52:05.791305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.902 [2024-07-15 11:52:05.791331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.902 [2024-07-15 11:52:05.791346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.902 [2024-07-15 11:52:05.791358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.902 [2024-07-15 11:52:05.791387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-07-15 11:52:05.801249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.801360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.801385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.801399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.801411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.801440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-07-15 11:52:05.811339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.811431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.811458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.811478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.811491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.811520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:57.903 [2024-07-15 11:52:05.821294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.821409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.821435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.821449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.821461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.821490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-07-15 11:52:05.831273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.831379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.831405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.831420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.831432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.831461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-07-15 11:52:05.841350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.841484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.841510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.841524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.841536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.841565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:57.903 [2024-07-15 11:52:05.851309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.851446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.851472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.851486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.851498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.851527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-07-15 11:52:05.861417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.861527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.861553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.861568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.861580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.861609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-07-15 11:52:05.871408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.871523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.871549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.871564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.871576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.871605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:57.903 [2024-07-15 11:52:05.881426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.903 [2024-07-15 11:52:05.881531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.903 [2024-07-15 11:52:05.881557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.903 [2024-07-15 11:52:05.881571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.903 [2024-07-15 11:52:05.881583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:57.903 [2024-07-15 11:52:05.881612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.903 qpair failed and we were unable to recover it. 00:25:58.162 [2024-07-15 11:52:05.891526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.162 [2024-07-15 11:52:05.891634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.162 [2024-07-15 11:52:05.891663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.162 [2024-07-15 11:52:05.891678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.162 [2024-07-15 11:52:05.891691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.162 [2024-07-15 11:52:05.891720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.162 qpair failed and we were unable to recover it. 00:25:58.162 [2024-07-15 11:52:05.901586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.162 [2024-07-15 11:52:05.901720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.162 [2024-07-15 11:52:05.901761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.162 [2024-07-15 11:52:05.901783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.162 [2024-07-15 11:52:05.901795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.162 [2024-07-15 11:52:05.901824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.162 qpair failed and we were unable to recover it. 
00:25:58.162 [2024-07-15 11:52:05.911511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.162 [2024-07-15 11:52:05.911621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.162 [2024-07-15 11:52:05.911647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.162 [2024-07-15 11:52:05.911662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.162 [2024-07-15 11:52:05.911674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.162 [2024-07-15 11:52:05.911703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.162 qpair failed and we were unable to recover it. 00:25:58.162 [2024-07-15 11:52:05.921548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.162 [2024-07-15 11:52:05.921706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.162 [2024-07-15 11:52:05.921732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.162 [2024-07-15 11:52:05.921757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.162 [2024-07-15 11:52:05.921770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.162 [2024-07-15 11:52:05.921810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.162 qpair failed and we were unable to recover it. 00:25:58.162 [2024-07-15 11:52:05.931536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.162 [2024-07-15 11:52:05.931670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.162 [2024-07-15 11:52:05.931697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.162 [2024-07-15 11:52:05.931712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.162 [2024-07-15 11:52:05.931725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.162 [2024-07-15 11:52:05.931764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.162 qpair failed and we were unable to recover it. 
00:25:58.162 [2024-07-15 11:52:05.941579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:05.941697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:05.941726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:05.941759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:05.941774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:05.941808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:05.951667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:05.951783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:05.951809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:05.951824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:05.951836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:05.951864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:05.961679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:05.961793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:05.961820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:05.961834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:05.961846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:05.961876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 
00:25:58.163 [2024-07-15 11:52:05.971687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:05.971804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:05.971830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:05.971845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:05.971857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:05.971886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:05.981760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:05.981857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:05.981882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:05.981896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:05.981908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:05.981937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:05.991797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:05.991892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:05.991923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:05.991938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:05.991951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:05.991979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 
00:25:58.163 [2024-07-15 11:52:06.001834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.001976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.002001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.002016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.002028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.002057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:06.011812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.011906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.011932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.011947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.011959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.011988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:06.021859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.021956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.021984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.021998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.022011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.022039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 
00:25:58.163 [2024-07-15 11:52:06.031868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.031962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.031987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.032001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.032014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.032049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:06.041937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.042047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.042072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.042086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.042099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.042128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:06.051945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.052041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.052066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.052081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.052093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.052122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 
00:25:58.163 [2024-07-15 11:52:06.061986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.062084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.062109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.062124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.062136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.062165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:06.071954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.072048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.072074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.072088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.163 [2024-07-15 11:52:06.072100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.163 [2024-07-15 11:52:06.072129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.163 qpair failed and we were unable to recover it. 00:25:58.163 [2024-07-15 11:52:06.082027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.163 [2024-07-15 11:52:06.082166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.163 [2024-07-15 11:52:06.082195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.163 [2024-07-15 11:52:06.082210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.082222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.082261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 
00:25:58.164 [2024-07-15 11:52:06.092050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.164 [2024-07-15 11:52:06.092174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.164 [2024-07-15 11:52:06.092200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.164 [2024-07-15 11:52:06.092214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.092226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.092255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 00:25:58.164 [2024-07-15 11:52:06.102114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.164 [2024-07-15 11:52:06.102225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.164 [2024-07-15 11:52:06.102250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.164 [2024-07-15 11:52:06.102265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.102277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.102306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 00:25:58.164 [2024-07-15 11:52:06.112098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.164 [2024-07-15 11:52:06.112214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.164 [2024-07-15 11:52:06.112240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.164 [2024-07-15 11:52:06.112254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.112266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.112294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 
00:25:58.164 [2024-07-15 11:52:06.122150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.164 [2024-07-15 11:52:06.122273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.164 [2024-07-15 11:52:06.122299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.164 [2024-07-15 11:52:06.122313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.122333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.122373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 00:25:58.164 [2024-07-15 11:52:06.132189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.164 [2024-07-15 11:52:06.132297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.164 [2024-07-15 11:52:06.132323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.164 [2024-07-15 11:52:06.132337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.132349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.132378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 00:25:58.164 [2024-07-15 11:52:06.142172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.164 [2024-07-15 11:52:06.142282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.164 [2024-07-15 11:52:06.142307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.164 [2024-07-15 11:52:06.142322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.164 [2024-07-15 11:52:06.142333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.164 [2024-07-15 11:52:06.142362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.164 qpair failed and we were unable to recover it. 
00:25:58.423 [2024-07-15 11:52:06.152302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.152441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.152466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.152481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.152493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.152530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.423 [2024-07-15 11:52:06.162266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.162379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.162405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.162419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.162431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.162460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.423 [2024-07-15 11:52:06.172363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.172480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.172505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.172520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.172532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.172561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 
00:25:58.423 [2024-07-15 11:52:06.182338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.182448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.182473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.182487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.182499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.182528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.423 [2024-07-15 11:52:06.192317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.192425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.192450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.192464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.192476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.192507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.423 [2024-07-15 11:52:06.202356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.202460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.202485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.202500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.202512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.202541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 
00:25:58.423 [2024-07-15 11:52:06.212384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.212489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.212514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.212534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.212547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.212576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.423 [2024-07-15 11:52:06.222425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.222535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.222561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.222575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.222587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.222616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.423 [2024-07-15 11:52:06.232409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.232540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.232566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.232580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.232592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.232621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 
00:25:58.423 [2024-07-15 11:52:06.242465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.423 [2024-07-15 11:52:06.242598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.423 [2024-07-15 11:52:06.242624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.423 [2024-07-15 11:52:06.242638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.423 [2024-07-15 11:52:06.242650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.423 [2024-07-15 11:52:06.242681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.423 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.252514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.252646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.252672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.252686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.252698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.252726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.262512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.262627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.262652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.262666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.262679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.262707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 
00:25:58.424 [2024-07-15 11:52:06.272567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.272689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.272715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.272729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.272747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.272777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.282579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.282687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.282712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.282727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.282746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.282777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.292615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.292724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.292756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.292772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.292784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.292813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 
00:25:58.424 [2024-07-15 11:52:06.302713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.302835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.302860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.302922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.302936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.302965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.312735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.312904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.312930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.312944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.312956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.312985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.322734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.322854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.322879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.322893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.322906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.322935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 
00:25:58.424 [2024-07-15 11:52:06.332735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.332838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.332864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.332878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.332890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.332919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.342793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.342894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.342920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.342934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.342946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.342976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.352908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.353006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.353036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.353050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.353063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.353091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 
00:25:58.424 [2024-07-15 11:52:06.362822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.362926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.362952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.362966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.362978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.363008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.372849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.372942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.372967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.372981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.372993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.373022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.382894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.382991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.383016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.383030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.383042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.383070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 
00:25:58.424 [2024-07-15 11:52:06.392922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.393034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.393065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.393080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.393092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.393121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.424 [2024-07-15 11:52:06.402907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.424 [2024-07-15 11:52:06.403005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.424 [2024-07-15 11:52:06.403031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.424 [2024-07-15 11:52:06.403045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.424 [2024-07-15 11:52:06.403057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.424 [2024-07-15 11:52:06.403086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.424 qpair failed and we were unable to recover it. 00:25:58.683 [2024-07-15 11:52:06.412925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.683 [2024-07-15 11:52:06.413020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.683 [2024-07-15 11:52:06.413047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.683 [2024-07-15 11:52:06.413062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.683 [2024-07-15 11:52:06.413074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.683 [2024-07-15 11:52:06.413104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.683 qpair failed and we were unable to recover it. 
00:25:58.683 [2024-07-15 11:52:06.423003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.683 [2024-07-15 11:52:06.423131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.683 [2024-07-15 11:52:06.423158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.683 [2024-07-15 11:52:06.423173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.683 [2024-07-15 11:52:06.423186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.683 [2024-07-15 11:52:06.423216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.683 qpair failed and we were unable to recover it. 00:25:58.683 [2024-07-15 11:52:06.432974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.683 [2024-07-15 11:52:06.433084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.683 [2024-07-15 11:52:06.433110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.683 [2024-07-15 11:52:06.433125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.683 [2024-07-15 11:52:06.433137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.683 [2024-07-15 11:52:06.433178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.683 qpair failed and we were unable to recover it. 00:25:58.683 [2024-07-15 11:52:06.443066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.683 [2024-07-15 11:52:06.443222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.683 [2024-07-15 11:52:06.443256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.683 [2024-07-15 11:52:06.443271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.683 [2024-07-15 11:52:06.443283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.683 [2024-07-15 11:52:06.443312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.683 qpair failed and we were unable to recover it. 
00:25:58.683 [2024-07-15 11:52:06.453042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.683 [2024-07-15 11:52:06.453163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.683 [2024-07-15 11:52:06.453188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.683 [2024-07-15 11:52:06.453202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.683 [2024-07-15 11:52:06.453213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.453243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.463108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.463220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.463245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.463259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.463271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.463301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.473143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.473261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.473287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.473302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.473314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.473343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 
00:25:58.684 [2024-07-15 11:52:06.483164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.483274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.483305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.483320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.483332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.483360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.493173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.493283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.493309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.493323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.493335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.493364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.503261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.503399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.503425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.503439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.503451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.503480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 
00:25:58.684 [2024-07-15 11:52:06.513237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.513360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.513386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.513401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.513413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.513442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.523300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.523419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.523444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.523459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.523476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.523506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.533297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.533404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.533429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.533443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.533455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.533483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 
00:25:58.684 [2024-07-15 11:52:06.543413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.543532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.543557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.543571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.543583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.543612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.553313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.553435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.553461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.553475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.553487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.553516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.563378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.563485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.563511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.563525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.563537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.563566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 
00:25:58.684 [2024-07-15 11:52:06.573389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.573498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.573524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.573538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.573550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.573579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.583462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.583574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.583600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.583615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.583627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.583656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 00:25:58.684 [2024-07-15 11:52:06.593490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.684 [2024-07-15 11:52:06.593597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.684 [2024-07-15 11:52:06.593622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.684 [2024-07-15 11:52:06.593637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.684 [2024-07-15 11:52:06.593649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.684 [2024-07-15 11:52:06.593677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.684 qpair failed and we were unable to recover it. 
00:25:58.684 [2024-07-15 11:52:06.603511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.603670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.603696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.603710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.603722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.603759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 00:25:58.685 [2024-07-15 11:52:06.613575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.613755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.613791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.613812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.613826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.613858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 00:25:58.685 [2024-07-15 11:52:06.623646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.623783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.623809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.623824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.623836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.623865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 
00:25:58.685 [2024-07-15 11:52:06.633530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.633638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.633664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.633678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.633690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.633720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 00:25:58.685 [2024-07-15 11:52:06.643570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.643677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.643702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.643717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.643729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.643765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 00:25:58.685 [2024-07-15 11:52:06.653610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.653719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.653752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.653768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.653780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.653809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 
00:25:58.685 [2024-07-15 11:52:06.663688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.685 [2024-07-15 11:52:06.663850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.685 [2024-07-15 11:52:06.663876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.685 [2024-07-15 11:52:06.663891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.685 [2024-07-15 11:52:06.663903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.685 [2024-07-15 11:52:06.663932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.685 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.673693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.673861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.673889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.673904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.673916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.673945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.683732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.683834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.683860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.683875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.683887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.683916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 
00:25:58.944 [2024-07-15 11:52:06.693733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.693838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.693863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.693878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.693890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.693919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.703804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.703904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.703930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.703950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.703963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.703992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.713800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.713904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.713930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.713944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.713956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.713985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 
00:25:58.944 [2024-07-15 11:52:06.723804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.723907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.723933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.723947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.723960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.723989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.733923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.734015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.734041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.734056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.734068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.734096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.743927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.744036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.744061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.744075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.744087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.744115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 
00:25:58.944 [2024-07-15 11:52:06.753944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.754048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.754074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.754088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.754100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.754128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.763925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.764027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.764052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.764066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.764078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.764108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.773965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.774066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.774099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.774113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.774125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.774155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 
00:25:58.944 [2024-07-15 11:52:06.784082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.944 [2024-07-15 11:52:06.784203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.944 [2024-07-15 11:52:06.784230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.944 [2024-07-15 11:52:06.784244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.944 [2024-07-15 11:52:06.784256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.944 [2024-07-15 11:52:06.784285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.944 qpair failed and we were unable to recover it. 00:25:58.944 [2024-07-15 11:52:06.794052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.794158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.794192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.794208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.794220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.794249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.804148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.804294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.804320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.804334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.804346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.804376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 
00:25:58.945 [2024-07-15 11:52:06.814103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.814207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.814232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.814247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.814259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.814288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.824141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.824258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.824283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.824297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.824309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.824338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.834160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.834268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.834293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.834307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.834319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.834353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 
00:25:58.945 [2024-07-15 11:52:06.844205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.844345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.844371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.844385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.844397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.844426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.854226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.854378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.854404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.854418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.854431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.854460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.864317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.864433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.864458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.864472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.864484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.864513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 
00:25:58.945 [2024-07-15 11:52:06.874275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.874385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.874412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.874426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.874438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.874467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.884289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.884402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.884436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.884451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.884463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.884492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.894350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.894455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.894480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.894495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.894507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.894537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 
00:25:58.945 [2024-07-15 11:52:06.904456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.904571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.904597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.904611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.904624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.904653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.914472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.914613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.914640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.914656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.914668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.914703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 00:25:58.945 [2024-07-15 11:52:06.924407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.945 [2024-07-15 11:52:06.924517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.945 [2024-07-15 11:52:06.924543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.945 [2024-07-15 11:52:06.924557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.945 [2024-07-15 11:52:06.924575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:58.945 [2024-07-15 11:52:06.924605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:58.945 qpair failed and we were unable to recover it. 
00:25:59.205 [2024-07-15 11:52:06.934461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.205 [2024-07-15 11:52:06.934606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.205 [2024-07-15 11:52:06.934632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.205 [2024-07-15 11:52:06.934646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.205 [2024-07-15 11:52:06.934658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.205 [2024-07-15 11:52:06.934687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.205 qpair failed and we were unable to recover it. 00:25:59.205 [2024-07-15 11:52:06.944448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.205 [2024-07-15 11:52:06.944565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.205 [2024-07-15 11:52:06.944589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.205 [2024-07-15 11:52:06.944603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.205 [2024-07-15 11:52:06.944615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.205 [2024-07-15 11:52:06.944644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.205 qpair failed and we were unable to recover it. 00:25:59.205 [2024-07-15 11:52:06.954512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.205 [2024-07-15 11:52:06.954622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.205 [2024-07-15 11:52:06.954648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:06.954662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:06.954674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:06.954713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 
00:25:59.206 [2024-07-15 11:52:06.964526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:06.964645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:06.964671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:06.964686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:06.964697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:06.964726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:06.974543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:06.974650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:06.974676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:06.974691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:06.974703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:06.974731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:06.984595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:06.984711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:06.984744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:06.984760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:06.984772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:06.984802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 
00:25:59.206 [2024-07-15 11:52:06.994574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:06.994692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:06.994717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:06.994731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:06.994751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:06.994781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.004675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.004803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.004829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.004844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.004856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.004886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.014673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.014800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.014826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.014840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.014858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.014890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 
00:25:59.206 [2024-07-15 11:52:07.024763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.024917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.024943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.024956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.024968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.024998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.034706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.034832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.034859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.034873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.034885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.034915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.044765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.044856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.044882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.044896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.044908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.044937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 
00:25:59.206 [2024-07-15 11:52:07.054792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.054887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.054912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.054926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.054938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.054967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.064822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.064929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.064955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.064969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.064981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.065010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.074827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.074923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.074949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.074963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.074975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.075010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 
00:25:59.206 [2024-07-15 11:52:07.084845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.084936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.084961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.084975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.084988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.085016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.206 qpair failed and we were unable to recover it. 00:25:59.206 [2024-07-15 11:52:07.094898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.206 [2024-07-15 11:52:07.095011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.206 [2024-07-15 11:52:07.095036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.206 [2024-07-15 11:52:07.095050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.206 [2024-07-15 11:52:07.095068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.206 [2024-07-15 11:52:07.095096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.207 [2024-07-15 11:52:07.104944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.105042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.105067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.105087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.105099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.105129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 
00:25:59.207 [2024-07-15 11:52:07.114996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.115123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.115149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.115164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.115176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.115205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.207 [2024-07-15 11:52:07.125041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.125133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.125158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.125173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.125185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.125214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.207 [2024-07-15 11:52:07.135055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.135162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.135188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.135202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.135222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.135251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 
00:25:59.207 [2024-07-15 11:52:07.145163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.145299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.145324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.145339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.145351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.145380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.207 [2024-07-15 11:52:07.155108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.155234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.155265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.155279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.155290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.155319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.207 [2024-07-15 11:52:07.165105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.165220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.165245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.165259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.165271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.165300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 
00:25:59.207 [2024-07-15 11:52:07.175185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.175293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.175319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.175333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.175345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.175375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.207 [2024-07-15 11:52:07.185184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.207 [2024-07-15 11:52:07.185295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.207 [2024-07-15 11:52:07.185321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.207 [2024-07-15 11:52:07.185335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.207 [2024-07-15 11:52:07.185347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.207 [2024-07-15 11:52:07.185376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.207 qpair failed and we were unable to recover it. 00:25:59.466 [2024-07-15 11:52:07.195230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.195343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.195372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.195388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.195400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.195429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 
00:25:59.466 [2024-07-15 11:52:07.205189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.205302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.205328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.205342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.205354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.205383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 00:25:59.466 [2024-07-15 11:52:07.215232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.215377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.215403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.215417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.215429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.215457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 00:25:59.466 [2024-07-15 11:52:07.225272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.225383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.225409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.225423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.225434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.225464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 
00:25:59.466 [2024-07-15 11:52:07.235294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.235412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.235437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.235451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.235463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.235498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 00:25:59.466 [2024-07-15 11:52:07.245309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.245432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.245458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.245472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.245484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.245513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 00:25:59.466 [2024-07-15 11:52:07.255406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.255510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.255536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.255550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.255562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.255591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 
00:25:59.466 [2024-07-15 11:52:07.265355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.265468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.265493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.265507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.265519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.466 [2024-07-15 11:52:07.265548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.466 qpair failed and we were unable to recover it. 00:25:59.466 [2024-07-15 11:52:07.275379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.466 [2024-07-15 11:52:07.275488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.466 [2024-07-15 11:52:07.275514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.466 [2024-07-15 11:52:07.275528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.466 [2024-07-15 11:52:07.275540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.275568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.285427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.285534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.285565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.285580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.285592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.285621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 
00:25:59.467 [2024-07-15 11:52:07.295420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.295553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.295578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.295592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.295604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.295633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.305525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.305666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.305691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.305706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.305718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.305753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.315456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.315566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.315592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.315606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.315618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.315646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 
00:25:59.467 [2024-07-15 11:52:07.325497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.325604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.325629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.325643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.325655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.325689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.335563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.335672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.335698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.335712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.335724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.335763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.345637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.345757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.345782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.345796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.345808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.345837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 
00:25:59.467 [2024-07-15 11:52:07.355690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.355824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.355850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.355864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.355877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.355905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.365628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.365747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.365774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.365789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.365801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.365831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.375662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.375811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.375837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.375852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.375863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.375892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 
00:25:59.467 [2024-07-15 11:52:07.385791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.385893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.385919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.385934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.385946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.385975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.395792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.395892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.395918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.395932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.395944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.395972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.405766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.405886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.405913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.405929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.405942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.405972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 
00:25:59.467 [2024-07-15 11:52:07.415801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.415914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.467 [2024-07-15 11:52:07.415940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.467 [2024-07-15 11:52:07.415954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.467 [2024-07-15 11:52:07.415971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.467 [2024-07-15 11:52:07.416001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.467 qpair failed and we were unable to recover it. 00:25:59.467 [2024-07-15 11:52:07.425954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.467 [2024-07-15 11:52:07.426069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.468 [2024-07-15 11:52:07.426094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.468 [2024-07-15 11:52:07.426109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.468 [2024-07-15 11:52:07.426121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.468 [2024-07-15 11:52:07.426151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.468 qpair failed and we were unable to recover it. 00:25:59.468 [2024-07-15 11:52:07.435874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.468 [2024-07-15 11:52:07.435974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.468 [2024-07-15 11:52:07.436000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.468 [2024-07-15 11:52:07.436015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.468 [2024-07-15 11:52:07.436027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.468 [2024-07-15 11:52:07.436056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.468 qpair failed and we were unable to recover it. 
00:25:59.468 [2024-07-15 11:52:07.445901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.468 [2024-07-15 11:52:07.445999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.468 [2024-07-15 11:52:07.446024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.468 [2024-07-15 11:52:07.446038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.468 [2024-07-15 11:52:07.446050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.468 [2024-07-15 11:52:07.446080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.468 qpair failed and we were unable to recover it. 00:25:59.727 [2024-07-15 11:52:07.455905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.455996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.456021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.456036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.456052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.456083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 00:25:59.727 [2024-07-15 11:52:07.465962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.466059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.466085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.466099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.466111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.466140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 
00:25:59.727 [2024-07-15 11:52:07.475959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.476059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.476085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.476099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.476111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.476139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 00:25:59.727 [2024-07-15 11:52:07.486102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.486231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.486257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.486272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.486283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.486313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 00:25:59.727 [2024-07-15 11:52:07.496050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.496162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.496187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.496201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.496214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.496242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 
00:25:59.727 [2024-07-15 11:52:07.506112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.506265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.506291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.506311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.506324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.506353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 00:25:59.727 [2024-07-15 11:52:07.516090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.516200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.516225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.516239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.727 [2024-07-15 11:52:07.516252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.727 [2024-07-15 11:52:07.516280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.727 qpair failed and we were unable to recover it. 00:25:59.727 [2024-07-15 11:52:07.526161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.727 [2024-07-15 11:52:07.526311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.727 [2024-07-15 11:52:07.526337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.727 [2024-07-15 11:52:07.526351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.526363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.526393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 
00:25:59.728 [2024-07-15 11:52:07.536223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.536328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.536356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.536370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.536382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.536410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.546191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.546310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.546336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.546350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.546362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.546391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.556280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.556419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.556445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.556459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.556472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.556500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 
00:25:59.728 [2024-07-15 11:52:07.566338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.566481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.566507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.566521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.566533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.566563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.576254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.576344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.576370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.576385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.576396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.576425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.586321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.586470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.586496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.586510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.586522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.586550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 
00:25:59.728 [2024-07-15 11:52:07.596337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.596449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.596480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.596495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.596507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.596536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.606301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.606410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.606435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.606449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.606461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.606490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.616327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.616444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.616470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.616484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.616497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.616526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 
00:25:59.728 [2024-07-15 11:52:07.626402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.626557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.626582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.626597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.626609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.626639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.636406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.636540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.636566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.636580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.636592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.636626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.646447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.646559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.646585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.646600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.646611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.646641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 
00:25:59.728 [2024-07-15 11:52:07.656466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.656578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.656604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.656619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.656631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.656660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.728 qpair failed and we were unable to recover it. 00:25:59.728 [2024-07-15 11:52:07.666519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.728 [2024-07-15 11:52:07.666631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.728 [2024-07-15 11:52:07.666657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.728 [2024-07-15 11:52:07.666671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.728 [2024-07-15 11:52:07.666683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.728 [2024-07-15 11:52:07.666711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.729 qpair failed and we were unable to recover it. 00:25:59.729 [2024-07-15 11:52:07.676508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.729 [2024-07-15 11:52:07.676624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.729 [2024-07-15 11:52:07.676650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.729 [2024-07-15 11:52:07.676665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.729 [2024-07-15 11:52:07.676677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.729 [2024-07-15 11:52:07.676705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.729 qpair failed and we were unable to recover it. 
00:25:59.729 [2024-07-15 11:52:07.686519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.729 [2024-07-15 11:52:07.686624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.729 [2024-07-15 11:52:07.686655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.729 [2024-07-15 11:52:07.686670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.729 [2024-07-15 11:52:07.686682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.729 [2024-07-15 11:52:07.686711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.729 qpair failed and we were unable to recover it. 00:25:59.729 [2024-07-15 11:52:07.696561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.729 [2024-07-15 11:52:07.696669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.729 [2024-07-15 11:52:07.696693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.729 [2024-07-15 11:52:07.696707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.729 [2024-07-15 11:52:07.696719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.729 [2024-07-15 11:52:07.696756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.729 qpair failed and we were unable to recover it. 00:25:59.729 [2024-07-15 11:52:07.706755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.729 [2024-07-15 11:52:07.706863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.729 [2024-07-15 11:52:07.706888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.729 [2024-07-15 11:52:07.706903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.729 [2024-07-15 11:52:07.706915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.729 [2024-07-15 11:52:07.706944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.729 qpair failed and we were unable to recover it. 
00:25:59.988 [2024-07-15 11:52:07.716648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.988 [2024-07-15 11:52:07.716812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.988 [2024-07-15 11:52:07.716838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.988 [2024-07-15 11:52:07.716853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.988 [2024-07-15 11:52:07.716865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.988 [2024-07-15 11:52:07.716895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.988 qpair failed and we were unable to recover it. 00:25:59.988 [2024-07-15 11:52:07.726794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.988 [2024-07-15 11:52:07.726908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.988 [2024-07-15 11:52:07.726933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.988 [2024-07-15 11:52:07.726948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.988 [2024-07-15 11:52:07.726960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.988 [2024-07-15 11:52:07.726995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.988 qpair failed and we were unable to recover it. 00:25:59.988 [2024-07-15 11:52:07.736714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.988 [2024-07-15 11:52:07.736830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.736855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.736870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.736882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.736910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.746800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.746903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.746928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.746942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.746954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.746983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.756758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.756863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.756889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.756904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.756916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.756945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.766818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.766911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.766937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.766951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.766963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.766992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.776838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.776933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.776964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.776980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.776993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.777021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.786878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.786986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.787011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.787026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.787038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.787068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.796905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.797006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.797032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.797054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.797066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.797095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.806881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.806971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.806997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.807011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.807023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.807051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.816956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.817046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.817070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.817084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.817101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.817130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.826972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.827078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.827103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.827117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.827129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.827157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.837097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.837216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.837242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.837256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.837269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.837298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.847008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.847131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.847155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.847169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.847181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.847210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.857018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.857156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.857181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.857195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.857207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.857246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.867105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.867222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.867247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.867261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.867273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.867302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.877148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.877303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.877329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.877344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.877356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.877384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.887143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.887300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.887326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.887340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.887352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.887381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.897155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.897304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.897332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.897347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.897361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.897391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.907248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.907361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.907386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.907406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.907419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.907448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.917336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.917480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.917506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.917521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.917533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.917562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 
00:25:59.989 [2024-07-15 11:52:07.927289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.927399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.927424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.927438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.927450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.989 [2024-07-15 11:52:07.927480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.989 qpair failed and we were unable to recover it. 00:25:59.989 [2024-07-15 11:52:07.937300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.989 [2024-07-15 11:52:07.937409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.989 [2024-07-15 11:52:07.937434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.989 [2024-07-15 11:52:07.937448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.989 [2024-07-15 11:52:07.937460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.990 [2024-07-15 11:52:07.937489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.990 qpair failed and we were unable to recover it. 00:25:59.990 [2024-07-15 11:52:07.947416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.990 [2024-07-15 11:52:07.947541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.990 [2024-07-15 11:52:07.947574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.990 [2024-07-15 11:52:07.947588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.990 [2024-07-15 11:52:07.947600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.990 [2024-07-15 11:52:07.947638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.990 qpair failed and we were unable to recover it. 
00:25:59.990 [2024-07-15 11:52:07.957353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.990 [2024-07-15 11:52:07.957505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.990 [2024-07-15 11:52:07.957531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.990 [2024-07-15 11:52:07.957546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.990 [2024-07-15 11:52:07.957558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.990 [2024-07-15 11:52:07.957586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.990 qpair failed and we were unable to recover it. 00:25:59.990 [2024-07-15 11:52:07.967381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.990 [2024-07-15 11:52:07.967487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.990 [2024-07-15 11:52:07.967513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.990 [2024-07-15 11:52:07.967527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.990 [2024-07-15 11:52:07.967539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:25:59.990 [2024-07-15 11:52:07.967567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.990 qpair failed and we were unable to recover it. 00:26:00.250 [2024-07-15 11:52:07.977416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:07.977566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:07.977592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:07.977607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:07.977623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:07.977651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 
00:26:00.251 [2024-07-15 11:52:07.987434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:07.987553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:07.987578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:07.987593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:07.987604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:07.987633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 00:26:00.251 [2024-07-15 11:52:07.997417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:07.997522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:07.997551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:07.997571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:07.997584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:07.997613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 00:26:00.251 [2024-07-15 11:52:08.007446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.007552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.007578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.007593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.007605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.007634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 
00:26:00.251 [2024-07-15 11:52:08.017506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.017612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.017638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.017652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.017664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.017692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 00:26:00.251 [2024-07-15 11:52:08.027518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.027631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.027657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.027671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.027683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.027712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 00:26:00.251 [2024-07-15 11:52:08.037607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.037722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.037765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.037781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.037793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.037822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 
00:26:00.251 [2024-07-15 11:52:08.047590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.047698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.047724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.047744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.047758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.047787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 00:26:00.251 [2024-07-15 11:52:08.057619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.057730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.057762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.057778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.057790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.057819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 00:26:00.251 [2024-07-15 11:52:08.067653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.067772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.067798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.067813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.251 [2024-07-15 11:52:08.067825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.251 [2024-07-15 11:52:08.067854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.251 qpair failed and we were unable to recover it. 
00:26:00.251 [2024-07-15 11:52:08.077637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.251 [2024-07-15 11:52:08.077791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.251 [2024-07-15 11:52:08.077817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.251 [2024-07-15 11:52:08.077831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.077843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.077872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.087707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.087827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.087858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.087873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.087885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.087914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.097792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.097885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.097911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.097925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.097937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.097966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 
00:26:00.252 [2024-07-15 11:52:08.107811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.107929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.107954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.107968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.107980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.108009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.117839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.117939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.117964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.117979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.117991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.118020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.127790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.127876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.127904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.127918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.127930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.127965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 
00:26:00.252 [2024-07-15 11:52:08.137852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.137957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.137983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.137997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.138009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.138037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.147878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.147979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.148005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.148019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.148031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.148060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.157964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.158060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.158086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.158100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.158112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.158140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 
00:26:00.252 [2024-07-15 11:52:08.167994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.168121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.168148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.168162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.168174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.168213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.178054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.178171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.178201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.252 [2024-07-15 11:52:08.178216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.252 [2024-07-15 11:52:08.178228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.252 [2024-07-15 11:52:08.178257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.252 qpair failed and we were unable to recover it. 00:26:00.252 [2024-07-15 11:52:08.187999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.252 [2024-07-15 11:52:08.188135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.252 [2024-07-15 11:52:08.188159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.253 [2024-07-15 11:52:08.188173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.253 [2024-07-15 11:52:08.188185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.253 [2024-07-15 11:52:08.188213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.253 qpair failed and we were unable to recover it. 
00:26:00.253 [2024-07-15 11:52:08.198058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.253 [2024-07-15 11:52:08.198162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.253 [2024-07-15 11:52:08.198186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.253 [2024-07-15 11:52:08.198200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.253 [2024-07-15 11:52:08.198212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.253 [2024-07-15 11:52:08.198241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.253 qpair failed and we were unable to recover it. 00:26:00.253 [2024-07-15 11:52:08.208067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.253 [2024-07-15 11:52:08.208178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.253 [2024-07-15 11:52:08.208204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.253 [2024-07-15 11:52:08.208218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.253 [2024-07-15 11:52:08.208230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.253 [2024-07-15 11:52:08.208258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.253 qpair failed and we were unable to recover it. 00:26:00.253 [2024-07-15 11:52:08.218100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.253 [2024-07-15 11:52:08.218234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.253 [2024-07-15 11:52:08.218259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.253 [2024-07-15 11:52:08.218274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.253 [2024-07-15 11:52:08.218291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.253 [2024-07-15 11:52:08.218320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.253 qpair failed and we were unable to recover it. 
00:26:00.253 [2024-07-15 11:52:08.228140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.253 [2024-07-15 11:52:08.228283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.253 [2024-07-15 11:52:08.228308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.253 [2024-07-15 11:52:08.228323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.253 [2024-07-15 11:52:08.228335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.253 [2024-07-15 11:52:08.228363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.253 qpair failed and we were unable to recover it. 00:26:00.514 [2024-07-15 11:52:08.238183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.514 [2024-07-15 11:52:08.238316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.514 [2024-07-15 11:52:08.238351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.514 [2024-07-15 11:52:08.238365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.514 [2024-07-15 11:52:08.238377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.514 [2024-07-15 11:52:08.238407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.514 qpair failed and we were unable to recover it. 00:26:00.514 [2024-07-15 11:52:08.248190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.514 [2024-07-15 11:52:08.248304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.514 [2024-07-15 11:52:08.248330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.514 [2024-07-15 11:52:08.248344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.514 [2024-07-15 11:52:08.248356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.514 [2024-07-15 11:52:08.248385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.514 qpair failed and we were unable to recover it. 
00:26:00.514 [2024-07-15 11:52:08.258215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.514 [2024-07-15 11:52:08.258323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.514 [2024-07-15 11:52:08.258349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.514 [2024-07-15 11:52:08.258363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.514 [2024-07-15 11:52:08.258375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.514 [2024-07-15 11:52:08.258404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.514 qpair failed and we were unable to recover it. 00:26:00.514 [2024-07-15 11:52:08.268251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.514 [2024-07-15 11:52:08.268387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.514 [2024-07-15 11:52:08.268413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.514 [2024-07-15 11:52:08.268427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.514 [2024-07-15 11:52:08.268439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.514 [2024-07-15 11:52:08.268468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.514 qpair failed and we were unable to recover it. 00:26:00.514 [2024-07-15 11:52:08.278272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.514 [2024-07-15 11:52:08.278411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.514 [2024-07-15 11:52:08.278437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.514 [2024-07-15 11:52:08.278451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.514 [2024-07-15 11:52:08.278463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.278491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 
00:26:00.515 [2024-07-15 11:52:08.288252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.288354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.288379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.288393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.288405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.288435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.298400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.298534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.298560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.298574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.298586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.298615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.308415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.308529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.308554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.308578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.308591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.308620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 
00:26:00.515 [2024-07-15 11:52:08.318345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.318466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.318491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.318506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.318518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.318547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.328440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.328576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.328601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.328615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.328628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.328656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.338422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.338526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.338552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.338566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.338578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.338607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 
00:26:00.515 [2024-07-15 11:52:08.348481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.348598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.348624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.348638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.348650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.348679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.358547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.358669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.358695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.358709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.358721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.358756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.368567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.368675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.368700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.368714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.368726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.368762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 
00:26:00.515 [2024-07-15 11:52:08.378589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.378714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.515 [2024-07-15 11:52:08.378760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.515 [2024-07-15 11:52:08.378775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.515 [2024-07-15 11:52:08.378787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.515 [2024-07-15 11:52:08.378816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.515 qpair failed and we were unable to recover it. 00:26:00.515 [2024-07-15 11:52:08.388579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.515 [2024-07-15 11:52:08.388677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.388701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.388715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.388752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.388784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.398610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.398703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.398750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.398772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.398785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.398815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 
00:26:00.516 [2024-07-15 11:52:08.408652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.408769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.408794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.408809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.408821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.408860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.418704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.418864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.418889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.418904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.418916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.418945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.428712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.428841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.428868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.428883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.428897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.428928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 
00:26:00.516 [2024-07-15 11:52:08.438704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.438821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.438848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.438864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.438876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.438906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.448753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.448847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.448872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.448886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.448899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.448929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.458784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.458884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.458909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.458923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.458937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.458967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 
00:26:00.516 [2024-07-15 11:52:08.468822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.468924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.468947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.468962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.468974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.469005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.478831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.478952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.478979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.478994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.516 [2024-07-15 11:52:08.479007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.516 [2024-07-15 11:52:08.479052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.516 qpair failed and we were unable to recover it. 00:26:00.516 [2024-07-15 11:52:08.488856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.516 [2024-07-15 11:52:08.488981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.516 [2024-07-15 11:52:08.489010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.516 [2024-07-15 11:52:08.489040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.517 [2024-07-15 11:52:08.489053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.517 [2024-07-15 11:52:08.489081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.517 qpair failed and we were unable to recover it. 
00:26:00.517 [2024-07-15 11:52:08.498837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.517 [2024-07-15 11:52:08.498935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.517 [2024-07-15 11:52:08.498961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.517 [2024-07-15 11:52:08.498975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.517 [2024-07-15 11:52:08.498987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.517 [2024-07-15 11:52:08.499017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.517 qpair failed and we were unable to recover it. 00:26:00.776 [2024-07-15 11:52:08.508930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.776 [2024-07-15 11:52:08.509103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.776 [2024-07-15 11:52:08.509128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.776 [2024-07-15 11:52:08.509143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.776 [2024-07-15 11:52:08.509156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.776 [2024-07-15 11:52:08.509186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.776 qpair failed and we were unable to recover it. 00:26:00.776 [2024-07-15 11:52:08.518955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.776 [2024-07-15 11:52:08.519068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.776 [2024-07-15 11:52:08.519094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.776 [2024-07-15 11:52:08.519109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.776 [2024-07-15 11:52:08.519121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.776 [2024-07-15 11:52:08.519150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.776 qpair failed and we were unable to recover it. 
00:26:00.776 [2024-07-15 11:52:08.528958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.776 [2024-07-15 11:52:08.529052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.776 [2024-07-15 11:52:08.529076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.776 [2024-07-15 11:52:08.529091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.776 [2024-07-15 11:52:08.529104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.776 [2024-07-15 11:52:08.529154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.776 qpair failed and we were unable to recover it. 00:26:00.776 [2024-07-15 11:52:08.539061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.776 [2024-07-15 11:52:08.539176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.776 [2024-07-15 11:52:08.539200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.776 [2024-07-15 11:52:08.539215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.776 [2024-07-15 11:52:08.539227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.776 [2024-07-15 11:52:08.539256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.776 qpair failed and we were unable to recover it. 00:26:00.776 [2024-07-15 11:52:08.549069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.776 [2024-07-15 11:52:08.549170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.776 [2024-07-15 11:52:08.549195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.776 [2024-07-15 11:52:08.549209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.776 [2024-07-15 11:52:08.549222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.776 [2024-07-15 11:52:08.549251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.776 qpair failed and we were unable to recover it. 
00:26:00.776 [2024-07-15 11:52:08.559092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.776 [2024-07-15 11:52:08.559236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.776 [2024-07-15 11:52:08.559261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.776 [2024-07-15 11:52:08.559275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.776 [2024-07-15 11:52:08.559288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.776 [2024-07-15 11:52:08.559316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.776 qpair failed and we were unable to recover it. 00:26:00.776 [2024-07-15 11:52:08.569132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.569224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.569248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.569263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.569275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.569304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.579087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.579182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.579211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.579226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.579239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.579268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 
00:26:00.777 [2024-07-15 11:52:08.589187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.589326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.589350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.589365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.589377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.589406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.599182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.599282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.599308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.599322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.599334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.599364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.609224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.609315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.609340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.609354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.609366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.609394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 
00:26:00.777 [2024-07-15 11:52:08.619237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.619336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.619360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.619375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.619392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.619422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.629320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.629418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.629442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.629456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.629468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.629497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.639297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.639395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.639420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.639434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.639447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.639475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 
00:26:00.777 [2024-07-15 11:52:08.649359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.649490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.649514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.649528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.649540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.649569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.659387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.659501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.659526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.659540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.659553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.659581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 00:26:00.777 [2024-07-15 11:52:08.669394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.777 [2024-07-15 11:52:08.669537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.777 [2024-07-15 11:52:08.669562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.777 [2024-07-15 11:52:08.669577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.777 [2024-07-15 11:52:08.669589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.777 [2024-07-15 11:52:08.669618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.777 qpair failed and we were unable to recover it. 
00:26:00.778 [2024-07-15 11:52:08.679370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.679507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.679532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.679546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.679558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.679587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 00:26:00.778 [2024-07-15 11:52:08.689375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.689507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.689533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.689547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.689562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.689593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 00:26:00.778 [2024-07-15 11:52:08.699428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.699547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.699571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.699587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.699599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.699630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 
00:26:00.778 [2024-07-15 11:52:08.709478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.709596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.709620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.709639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.709657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.709688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 00:26:00.778 [2024-07-15 11:52:08.719555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.719652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.719676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.719690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.719702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.719753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 00:26:00.778 [2024-07-15 11:52:08.729485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.729573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.729596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.729611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.729623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.729652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 
00:26:00.778 [2024-07-15 11:52:08.739528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.739623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.739646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.739660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.739672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.739701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 00:26:00.778 [2024-07-15 11:52:08.749591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.749709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.749758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.749774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.749787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.749817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 00:26:00.778 [2024-07-15 11:52:08.759604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.778 [2024-07-15 11:52:08.759700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.778 [2024-07-15 11:52:08.759748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.778 [2024-07-15 11:52:08.759765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.778 [2024-07-15 11:52:08.759778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:00.778 [2024-07-15 11:52:08.759809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:00.778 qpair failed and we were unable to recover it. 
00:26:01.038 [2024-07-15 11:52:08.769625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.769717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.769763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.769778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.769791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.769822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 00:26:01.038 [2024-07-15 11:52:08.779612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.779711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.779760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.779776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.779788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.779818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 00:26:01.038 [2024-07-15 11:52:08.789706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.789869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.789896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.789911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.789923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.789954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 
00:26:01.038 [2024-07-15 11:52:08.799692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.799809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.799835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.799855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.799868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.799898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 00:26:01.038 [2024-07-15 11:52:08.809734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.809891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.809918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.809933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.809945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.809975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 00:26:01.038 [2024-07-15 11:52:08.819759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.819855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.819880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.819894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.819907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.819936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 
00:26:01.038 [2024-07-15 11:52:08.829902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.830010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.830049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.830064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.830076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.830106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 00:26:01.038 [2024-07-15 11:52:08.839850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.839943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.839967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.839982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.839994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.840038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 00:26:01.038 [2024-07-15 11:52:08.849920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.850055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.850091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.038 [2024-07-15 11:52:08.850106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.038 [2024-07-15 11:52:08.850118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.038 [2024-07-15 11:52:08.850147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.038 qpair failed and we were unable to recover it. 
00:26:01.038 [2024-07-15 11:52:08.859900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.038 [2024-07-15 11:52:08.860052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.038 [2024-07-15 11:52:08.860078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.860093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.860105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.860134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.869977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.870093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.870119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.870133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.870146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.870174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.879979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.880092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.880118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.880133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.880145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.880174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 
00:26:01.039 [2024-07-15 11:52:08.889976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.890113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.890143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.890159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.890172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.890201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.900061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.900151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.900176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.900190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.900202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.900231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.910045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.910163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.910189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.910204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.910217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.910246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 
00:26:01.039 [2024-07-15 11:52:08.920096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.920191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.920215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.920230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.920242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.920271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.930085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.930188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.930213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.930229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.930242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.930277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.940096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.940192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.940216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.940231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.940243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.940271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 
00:26:01.039 [2024-07-15 11:52:08.950157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.950253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.950291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.950306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.950318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.950348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.960224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.960407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.960434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.960449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.960462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.960493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 00:26:01.039 [2024-07-15 11:52:08.970253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.039 [2024-07-15 11:52:08.970343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.039 [2024-07-15 11:52:08.970367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.039 [2024-07-15 11:52:08.970382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.039 [2024-07-15 11:52:08.970394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.039 [2024-07-15 11:52:08.970423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.039 qpair failed and we were unable to recover it. 
00:26:01.040 [2024-07-15 11:52:08.980240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.040 [2024-07-15 11:52:08.980332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.040 [2024-07-15 11:52:08.980362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.040 [2024-07-15 11:52:08.980378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.040 [2024-07-15 11:52:08.980390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.040 [2024-07-15 11:52:08.980418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.040 qpair failed and we were unable to recover it. 00:26:01.040 [2024-07-15 11:52:08.990284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.040 [2024-07-15 11:52:08.990392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.040 [2024-07-15 11:52:08.990416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.040 [2024-07-15 11:52:08.990430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.040 [2024-07-15 11:52:08.990443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.040 [2024-07-15 11:52:08.990473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.040 qpair failed and we were unable to recover it. 00:26:01.040 [2024-07-15 11:52:09.000307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.040 [2024-07-15 11:52:09.000403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.040 [2024-07-15 11:52:09.000427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.040 [2024-07-15 11:52:09.000441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.040 [2024-07-15 11:52:09.000453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.040 [2024-07-15 11:52:09.000482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.040 qpair failed and we were unable to recover it. 
00:26:01.040 [2024-07-15 11:52:09.010317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.040 [2024-07-15 11:52:09.010438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.040 [2024-07-15 11:52:09.010464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.040 [2024-07-15 11:52:09.010478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.040 [2024-07-15 11:52:09.010491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.040 [2024-07-15 11:52:09.010520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.040 qpair failed and we were unable to recover it. 00:26:01.040 [2024-07-15 11:52:09.020387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.040 [2024-07-15 11:52:09.020509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.040 [2024-07-15 11:52:09.020536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.040 [2024-07-15 11:52:09.020551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.040 [2024-07-15 11:52:09.020564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.040 [2024-07-15 11:52:09.020602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.040 qpair failed and we were unable to recover it. 00:26:01.299 [2024-07-15 11:52:09.030418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.299 [2024-07-15 11:52:09.030525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.299 [2024-07-15 11:52:09.030565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.299 [2024-07-15 11:52:09.030580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.299 [2024-07-15 11:52:09.030592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.299 [2024-07-15 11:52:09.030623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.299 qpair failed and we were unable to recover it. 
00:26:01.299 [2024-07-15 11:52:09.040438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.299 [2024-07-15 11:52:09.040583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.299 [2024-07-15 11:52:09.040609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.299 [2024-07-15 11:52:09.040631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.299 [2024-07-15 11:52:09.040644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.299 [2024-07-15 11:52:09.040672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.299 qpair failed and we were unable to recover it. 00:26:01.299 [2024-07-15 11:52:09.050478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.299 [2024-07-15 11:52:09.050572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.299 [2024-07-15 11:52:09.050596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.299 [2024-07-15 11:52:09.050610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.299 [2024-07-15 11:52:09.050622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.299 [2024-07-15 11:52:09.050650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.299 qpair failed and we were unable to recover it. 00:26:01.299 [2024-07-15 11:52:09.060556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.299 [2024-07-15 11:52:09.060658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.299 [2024-07-15 11:52:09.060682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.299 [2024-07-15 11:52:09.060696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.299 [2024-07-15 11:52:09.060708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.299 [2024-07-15 11:52:09.060761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.299 qpair failed and we were unable to recover it. 
00:26:01.299 [2024-07-15 11:52:09.070539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.299 [2024-07-15 11:52:09.070644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.299 [2024-07-15 11:52:09.070667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.299 [2024-07-15 11:52:09.070682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.299 [2024-07-15 11:52:09.070695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.299 [2024-07-15 11:52:09.070748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.299 qpair failed and we were unable to recover it. 00:26:01.299 [2024-07-15 11:52:09.080576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.299 [2024-07-15 11:52:09.080707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.299 [2024-07-15 11:52:09.080758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.299 [2024-07-15 11:52:09.080775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.299 [2024-07-15 11:52:09.080787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.080817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.090633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.090817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.090842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.090856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.090869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.090900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 
00:26:01.300 [2024-07-15 11:52:09.100630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.100745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.100770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.100785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.100797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.100827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.110660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.110815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.110840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.110855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.110873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.110904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.120631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.120749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.120774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.120789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.120801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.120831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 
00:26:01.300 [2024-07-15 11:52:09.130700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.130835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.130860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.130875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.130888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.130917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.140750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.140844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.140867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.140882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.140895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.140926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.150824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.150936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.150961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.150976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.150990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.151035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 
00:26:01.300 [2024-07-15 11:52:09.160852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.160958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.160984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.160998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.161012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.161042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.170828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.170950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.170974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.170990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.171002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.171047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.180837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.180937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.180961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.180976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.180989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.181043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 
00:26:01.300 [2024-07-15 11:52:09.190894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.191038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.191064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.191091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.191105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.300 [2024-07-15 11:52:09.191133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.300 qpair failed and we were unable to recover it. 00:26:01.300 [2024-07-15 11:52:09.200960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.300 [2024-07-15 11:52:09.201085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.300 [2024-07-15 11:52:09.201109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.300 [2024-07-15 11:52:09.201128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.300 [2024-07-15 11:52:09.201157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.201188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 00:26:01.301 [2024-07-15 11:52:09.210907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.211020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.211047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.211062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.211090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.211119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 
00:26:01.301 [2024-07-15 11:52:09.220949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.221060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.221084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.221098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.221110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.221139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 00:26:01.301 [2024-07-15 11:52:09.231067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.231164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.231188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.231202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.231215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.231244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 00:26:01.301 [2024-07-15 11:52:09.241005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.241124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.241147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.241161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.241174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.241203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 
00:26:01.301 [2024-07-15 11:52:09.251050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.251164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.251190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.251204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.251217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.251245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 00:26:01.301 [2024-07-15 11:52:09.261034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.261126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.261152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.261166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.261179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.261207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 00:26:01.301 [2024-07-15 11:52:09.271114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.271230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.271254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.271269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.271282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.271310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 
00:26:01.301 [2024-07-15 11:52:09.281089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.301 [2024-07-15 11:52:09.281204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.301 [2024-07-15 11:52:09.281230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.301 [2024-07-15 11:52:09.281245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.301 [2024-07-15 11:52:09.281258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.301 [2024-07-15 11:52:09.281288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.301 qpair failed and we were unable to recover it. 00:26:01.560 [2024-07-15 11:52:09.291137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.560 [2024-07-15 11:52:09.291234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.560 [2024-07-15 11:52:09.291265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.560 [2024-07-15 11:52:09.291280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.560 [2024-07-15 11:52:09.291292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.560 [2024-07-15 11:52:09.291322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.560 qpair failed and we were unable to recover it. 00:26:01.560 [2024-07-15 11:52:09.301158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.560 [2024-07-15 11:52:09.301258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.560 [2024-07-15 11:52:09.301282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.560 [2024-07-15 11:52:09.301296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.560 [2024-07-15 11:52:09.301309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.560 [2024-07-15 11:52:09.301337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.560 qpair failed and we were unable to recover it. 
00:26:01.560 [2024-07-15 11:52:09.311218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.560 [2024-07-15 11:52:09.311318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.560 [2024-07-15 11:52:09.311341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.560 [2024-07-15 11:52:09.311356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.560 [2024-07-15 11:52:09.311368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.560 [2024-07-15 11:52:09.311398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.560 qpair failed and we were unable to recover it. 00:26:01.560 [2024-07-15 11:52:09.321202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.560 [2024-07-15 11:52:09.321298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.560 [2024-07-15 11:52:09.321322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.560 [2024-07-15 11:52:09.321336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.560 [2024-07-15 11:52:09.321349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.560 [2024-07-15 11:52:09.321377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.560 qpair failed and we were unable to recover it. 00:26:01.560 [2024-07-15 11:52:09.331353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.560 [2024-07-15 11:52:09.331449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.560 [2024-07-15 11:52:09.331473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.331487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.331499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.331532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 
00:26:01.561 [2024-07-15 11:52:09.341279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.341373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.341396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.341411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.341423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.341452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 00:26:01.561 [2024-07-15 11:52:09.351318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.351417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.351442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.351456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.351469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.351498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 00:26:01.561 [2024-07-15 11:52:09.361328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.361424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.361449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.361464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.361476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.361505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 
00:26:01.561 [2024-07-15 11:52:09.371426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.371528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.371551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.371565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.371577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.371606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 00:26:01.561 [2024-07-15 11:52:09.381368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.381497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.381528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.381544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.381557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.381586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 00:26:01.561 [2024-07-15 11:52:09.391405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.391514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.391540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.391554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.391566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.391595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 
00:26:01.561 [2024-07-15 11:52:09.401484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.401579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.401603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.401617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.401629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.401658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 00:26:01.561 [2024-07-15 11:52:09.411439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.411556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.411581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.411596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.411608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.411636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 00:26:01.561 [2024-07-15 11:52:09.421513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.421606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.421631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.421646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.561 [2024-07-15 11:52:09.421659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.561 [2024-07-15 11:52:09.421694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.561 qpair failed and we were unable to recover it. 
00:26:01.561 [2024-07-15 11:52:09.431526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.561 [2024-07-15 11:52:09.431624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.561 [2024-07-15 11:52:09.431648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.561 [2024-07-15 11:52:09.431662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.431674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.431703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.441523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.441631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.441657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.441672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.441684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.441712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.451571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.451679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.451704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.451718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.451730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.451768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 
00:26:01.562 [2024-07-15 11:52:09.461623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.461743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.461770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.461785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.461797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.461828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.471618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.471727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.471766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.471783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.471796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.471826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.481654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.481766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.481791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.481806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.481819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.481849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 
00:26:01.562 [2024-07-15 11:52:09.491675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.491792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.491818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.491833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.491846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.491875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.501789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.501938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.501965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.501979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.501992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.502022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.511842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.511945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.511971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.511986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.512004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.512049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 
00:26:01.562 [2024-07-15 11:52:09.521783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.521879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.521903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.562 [2024-07-15 11:52:09.521918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.562 [2024-07-15 11:52:09.521930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.562 [2024-07-15 11:52:09.521960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.562 qpair failed and we were unable to recover it. 00:26:01.562 [2024-07-15 11:52:09.531821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.562 [2024-07-15 11:52:09.531931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.562 [2024-07-15 11:52:09.531958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.563 [2024-07-15 11:52:09.531973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.563 [2024-07-15 11:52:09.531986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.563 [2024-07-15 11:52:09.532016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.563 qpair failed and we were unable to recover it. 00:26:01.563 [2024-07-15 11:52:09.541852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.563 [2024-07-15 11:52:09.541959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.563 [2024-07-15 11:52:09.541986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.563 [2024-07-15 11:52:09.542001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.563 [2024-07-15 11:52:09.542014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.563 [2024-07-15 11:52:09.542044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.563 qpair failed and we were unable to recover it. 
00:26:01.821 [2024-07-15 11:52:09.551984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.552084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.821 [2024-07-15 11:52:09.552109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.821 [2024-07-15 11:52:09.552124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.821 [2024-07-15 11:52:09.552138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.821 [2024-07-15 11:52:09.552168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.821 qpair failed and we were unable to recover it. 00:26:01.821 [2024-07-15 11:52:09.561930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.562030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.821 [2024-07-15 11:52:09.562069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.821 [2024-07-15 11:52:09.562084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.821 [2024-07-15 11:52:09.562096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.821 [2024-07-15 11:52:09.562126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.821 qpair failed and we were unable to recover it. 00:26:01.821 [2024-07-15 11:52:09.571931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.572038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.821 [2024-07-15 11:52:09.572064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.821 [2024-07-15 11:52:09.572079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.821 [2024-07-15 11:52:09.572091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.821 [2024-07-15 11:52:09.572120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.821 qpair failed and we were unable to recover it. 
00:26:01.821 [2024-07-15 11:52:09.581944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.582055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.821 [2024-07-15 11:52:09.582081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.821 [2024-07-15 11:52:09.582095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.821 [2024-07-15 11:52:09.582108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.821 [2024-07-15 11:52:09.582137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.821 qpair failed and we were unable to recover it. 00:26:01.821 [2024-07-15 11:52:09.591994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.592108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.821 [2024-07-15 11:52:09.592133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.821 [2024-07-15 11:52:09.592147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.821 [2024-07-15 11:52:09.592159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.821 [2024-07-15 11:52:09.592189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.821 qpair failed and we were unable to recover it. 00:26:01.821 [2024-07-15 11:52:09.602098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.602196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.821 [2024-07-15 11:52:09.602220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.821 [2024-07-15 11:52:09.602240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.821 [2024-07-15 11:52:09.602253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.821 [2024-07-15 11:52:09.602282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.821 qpair failed and we were unable to recover it. 
00:26:01.821 [2024-07-15 11:52:09.612020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.821 [2024-07-15 11:52:09.612120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.612143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.612158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.612170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.612199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.622059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.622172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.622199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.622213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.622225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.622254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.632099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.632219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.632245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.632260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.632272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.632301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 
00:26:01.822 [2024-07-15 11:52:09.642101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.642193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.642218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.642233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.642245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.642273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.652126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.652216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.652242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.652257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.652269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.652297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.662242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.662338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.662362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.662376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.662389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.662417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 
00:26:01.822 [2024-07-15 11:52:09.672288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.672392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.672417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.672431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.672444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.672472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.682301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.682411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.682436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.682451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.682463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.682492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.692268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.692388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.692413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.692433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.692446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.692475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 
00:26:01.822 [2024-07-15 11:52:09.702279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.702388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.702412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.702426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.702438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.702469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.712396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.712495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.712519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.712533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.712546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.712574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.722324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.722416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.722439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.722454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.722466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.722495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 
00:26:01.822 [2024-07-15 11:52:09.732347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.732446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.732475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.732489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.732502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.732530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.742375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.742469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.742493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.742507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.742519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.742547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.752405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.752502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.752525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.752539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.752551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.752580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 
00:26:01.822 [2024-07-15 11:52:09.762433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.762525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.762549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.762562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.762575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.762603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.772477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.772586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.772611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.772625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.772637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.772665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.782487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.782575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.782606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.782622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.782634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.782663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 
00:26:01.822 [2024-07-15 11:52:09.792521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.792619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.792645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.792659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.792671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.792700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:01.822 [2024-07-15 11:52:09.802564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.822 [2024-07-15 11:52:09.802651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.822 [2024-07-15 11:52:09.802676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.822 [2024-07-15 11:52:09.802689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.822 [2024-07-15 11:52:09.802702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:01.822 [2024-07-15 11:52:09.802752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.822 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.812579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.812688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.812714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.812754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.812768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.812799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 
00:26:02.081 [2024-07-15 11:52:09.822616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.822709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.822756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.822772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.822785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.822820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.832731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.832859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.832885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.832901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.832913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.832943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.842753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.842856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.842882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.842897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.842911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.842941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 
00:26:02.081 [2024-07-15 11:52:09.852670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.852784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.852810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.852824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.852838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.852867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.862707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.862830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.862856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.862871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.862883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.862913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.872791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.872897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.872929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.872945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.872959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.872989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 
00:26:02.081 [2024-07-15 11:52:09.882818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.882931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.882957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.882972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.882985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.883014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.892818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.892910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.892934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.892948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.892961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a44000b90 00:26:02.081 [2024-07-15 11:52:09.892991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.081 qpair failed and we were unable to recover it. 00:26:02.081 [2024-07-15 11:52:09.902872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.081 [2024-07-15 11:52:09.902976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.081 [2024-07-15 11:52:09.903010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.081 [2024-07-15 11:52:09.903041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.081 [2024-07-15 11:52:09.903054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a54000b90 00:26:02.081 [2024-07-15 11:52:09.903084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:02.081 qpair failed and we were unable to recover it. 
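Each retry block above and below records the same failure path: the target-side ctrlr.c rejects the attempt to add an I/O qpair ("Unknown controller ID 0x1"), the host's Fabrics CONNECT poll then completes with sct 1 / sc 130, and the TCP qpair is torn down with CQ transport error -6 before the next attempt. As a hedged spot-check (not part of the captured run, and assuming the target is still up on its default RPC socket /var/tmp/spdk.sock), the target's remaining subsystems and listeners could be inspected while such a loop is running:
# Hypothetical spot-check; rpc.py and the repository path are taken from this log,
# but this command was not executed as part of the captured run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo "$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems    # lists subsystems, namespaces and listeners the target still exposes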
00:26:02.081 [2024-07-15 11:52:09.912898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.082 [2024-07-15 11:52:09.913000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.082 [2024-07-15 11:52:09.913028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.082 [2024-07-15 11:52:09.913043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.082 [2024-07-15 11:52:09.913076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3a54000b90 00:26:02.082 [2024-07-15 11:52:09.913107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:02.082 qpair failed and we were unable to recover it. 00:26:02.082 [2024-07-15 11:52:09.913251] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:02.082 A controller has encountered a failure and is being reset. 00:26:02.082 [2024-07-15 11:52:09.913313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4cae0 (9): Bad file descriptor 00:26:02.082 Controller properly reset. 00:26:02.082 Initializing NVMe Controllers 00:26:02.082 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:02.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:02.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:02.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:02.082 Initialization complete. Launching workers. 
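After the keep-alive failure is detected the host resets the controller and re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, as the "Controller properly reset" and "Attached to NVMe over Fabrics controller" lines show. A minimal, hypothetical way to confirm from the initiator host that this listener answers again (nvme-cli is not used by the test run itself; address, port and subsystem NQN are copied from the log above):
# Hypothetical follow-up check, not part of the captured run.
sudo nvme discover -t tcp -a 10.0.0.2 -s 4420    # discovery request against the listener shown above
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1    # one explicit reconnect attempt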
00:26:02.082 Starting thread on core 1 00:26:02.082 Starting thread on core 2 00:26:02.082 Starting thread on core 3 00:26:02.082 Starting thread on core 0 00:26:02.082 11:52:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:02.082 00:26:02.082 real 0m10.767s 00:26:02.082 user 0m18.771s 00:26:02.082 sys 0m5.606s 00:26:02.082 11:52:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.082 11:52:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:02.082 ************************************ 00:26:02.082 END TEST nvmf_target_disconnect_tc2 00:26:02.082 ************************************ 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:02.082 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:02.082 rmmod nvme_tcp 00:26:02.082 rmmod nvme_fabrics 00:26:02.082 rmmod nvme_keyring 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3133212 ']' 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3133212 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3133212 ']' 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3133212 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3133212 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3133212' 00:26:02.342 killing process with pid 3133212 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3133212 00:26:02.342 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3133212 00:26:02.602 
11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.602 11:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.509 11:52:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.509 00:26:04.509 real 0m15.664s 00:26:04.509 user 0m44.939s 00:26:04.509 sys 0m7.541s 00:26:04.509 11:52:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:04.509 11:52:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:04.509 ************************************ 00:26:04.509 END TEST nvmf_target_disconnect 00:26:04.509 ************************************ 00:26:04.509 11:52:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:04.509 11:52:12 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:04.509 11:52:12 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:04.509 11:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 11:52:12 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:04.768 00:26:04.768 real 19m23.393s 00:26:04.768 user 45m43.767s 00:26:04.768 sys 5m4.815s 00:26:04.768 11:52:12 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:04.768 11:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 ************************************ 00:26:04.768 END TEST nvmf_tcp 00:26:04.768 ************************************ 00:26:04.768 11:52:12 -- common/autotest_common.sh@1142 -- # return 0 00:26:04.768 11:52:12 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:26:04.768 11:52:12 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:04.768 11:52:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:04.768 11:52:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.768 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 ************************************ 00:26:04.768 START TEST spdkcli_nvmf_tcp 00:26:04.768 ************************************ 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:04.768 * Looking for test storage... 
00:26:04.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.768 11:52:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3134410 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3134410 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3134410 ']' 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:04.769 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.769 [2024-07-15 11:52:12.673694] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:26:04.769 [2024-07-15 11:52:12.673813] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134410 ] 00:26:04.769 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.769 [2024-07-15 11:52:12.731410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:05.027 [2024-07-15 11:52:12.838657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.027 [2024-07-15 11:52:12.838661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.027 11:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:05.027 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:05.027 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:05.027 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:05.027 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:05.027 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:05.027 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:05.027 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:05.027 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:05.027 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:05.027 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:05.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:05.027 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:05.027 ' 00:26:07.559 [2024-07-15 11:52:15.518281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.932 [2024-07-15 11:52:16.738390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:11.458 [2024-07-15 11:52:18.989365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:13.356 [2024-07-15 11:52:20.943449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:14.727 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:14.727 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:14.727 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:14.727 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:14.727 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:14.727 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:14.727 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:14.727 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:14.727 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:14.727 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:14.727 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:14.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:14.727 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:14.727 11:52:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:15.292 11:52:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:15.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:15.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:15.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:15.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:15.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:15.292 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:15.292 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:15.292 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:15.292 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:15.292 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:15.292 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:15.292 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:15.292 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:15.292 ' 00:26:20.543 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:20.543 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:20.543 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:20.543 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:20.543 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:20.543 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:20.543 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:20.543 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:20.543 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:20.543 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:20.543 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:26:20.543 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:20.543 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:20.543 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3134410 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3134410 ']' 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3134410 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3134410 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3134410' 00:26:20.543 killing process with pid 3134410 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3134410 00:26:20.543 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3134410 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3134410 ']' 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3134410 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3134410 ']' 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3134410 00:26:20.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3134410) - No such process 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3134410 is not found' 00:26:20.799 Process with pid 3134410 is not found 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:20.799 00:26:20.799 real 0m16.031s 00:26:20.799 user 0m33.850s 00:26:20.799 sys 0m0.790s 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.799 11:52:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.800 ************************************ 00:26:20.800 END TEST spdkcli_nvmf_tcp 00:26:20.800 ************************************ 00:26:20.800 11:52:28 -- common/autotest_common.sh@1142 -- # return 0 00:26:20.800 11:52:28 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:20.800 11:52:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:20.800 11:52:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.800 11:52:28 -- common/autotest_common.sh@10 -- # set +x 00:26:20.800 ************************************ 00:26:20.800 START TEST nvmf_identify_passthru 00:26:20.800 ************************************ 00:26:20.800 11:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:20.800 * Looking for test storage... 00:26:20.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.800 11:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.800 11:52:28 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.800 11:52:28 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.800 11:52:28 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.800 11:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.800 11:52:28 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.800 11:52:28 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.800 11:52:28 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:20.800 11:52:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.800 11:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.800 11:52:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:20.800 11:52:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.800 11:52:28 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.800 11:52:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.374 11:52:30 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:23.374 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:23.374 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.374 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:23.375 Found net devices under 0000:84:00.0: cvl_0_0 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:23.375 Found net devices under 0000:84:00.1: cvl_0_1 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
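The discovery pass above resolves each supported E810 function to its kernel netdev by globbing the device's sysfs node, which is how cvl_0_0 and cvl_0_1 were found under 0000:84:00.0 and 0000:84:00.1. A minimal standalone sketch of that lookup (not part of the captured run; the helper name and the BDF argument are illustrative):

    # Resolve the netdev name(s) bound to a PCI function and report link state,
    # mirroring the harness's /sys/bus/pci/devices/$pci/net/* glob.
    list_pci_netdevs() {
        local bdf=$1 dev
        for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
            [ -e "$dev" ] || { echo "no netdev bound to $bdf"; return 1; }
            echo "Found net device under $bdf: $(basename "$dev") ($(cat "$dev"/operstate))"
        done
    }
    list_pci_netdevs 0000:84:00.0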
00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:23.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:26:23.375 00:26:23.375 --- 10.0.0.2 ping statistics --- 00:26:23.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.375 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
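The nvmf_tcp_init block just traced splits the two E810 ports between the default namespace (initiator side, cvl_0_1, 10.0.0.1) and a dedicated cvl_0_0_ns_spdk namespace (target side, cvl_0_0, 10.0.0.2), opens TCP port 4420 through iptables, and pings both directions. A condensed, standalone replay of those steps, assuming the same interface names and addressing (run as root):

    set -e
    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    # Target port moves into its own namespace; the initiator port stays put.
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic on the listener port, then verify reachability.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1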
00:26:23.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:26:23.375 00:26:23.375 --- 10.0.0.1 ping statistics --- 00:26:23.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.375 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.375 11:52:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.375 11:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:23.375 11:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:23.375 11:52:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:23.375 11:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:23.375 11:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:26:23.375 11:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:26:23.375 11:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:26:23.375 11:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:26:23.375 11:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:26:23.375 11:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:23.375 11:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:23.375 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.558 
11:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:26:27.558 11:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:26:27.558 11:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:27.558 11:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:27.558 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3138935 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3138935 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3138935 ']' 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.741 [2024-07-15 11:52:39.496986] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:26:31.741 [2024-07-15 11:52:39.497103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.741 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.741 [2024-07-15 11:52:39.562549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.741 [2024-07-15 11:52:39.671854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.741 [2024-07-15 11:52:39.671910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
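Before the passthru subsystem is created, the test reads the serial and model number directly from the local PCIe controller (0000:82:00.0) so it can later compare them with what the NVMe-oF passthru target reports over TCP. A sketch of that capture using the same spdk_nvme_identify invocation seen above (SPDK_BIN is shorthand for the build path in this workspace):

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    BDF=0000:82:00.0
    # -r selects the local PCIe controller by traddr; flags copied from the run above.
    serial=$("$SPDK_BIN"/spdk_nvme_identify -r "trtype:PCIe traddr:$BDF" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
    model=$("$SPDK_BIN"/spdk_nvme_identify -r "trtype:PCIe traddr:$BDF" -i 0 \
            | grep 'Model Number:' | awk '{print $3}')
    echo "local controller: serial=$serial model=$model"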
00:26:31.741 [2024-07-15 11:52:39.671924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.741 [2024-07-15 11:52:39.671936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.741 [2024-07-15 11:52:39.671945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:31.741 [2024-07-15 11:52:39.671994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.741 [2024-07-15 11:52:39.672053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.741 [2024-07-15 11:52:39.672126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.741 [2024-07-15 11:52:39.672130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.741 INFO: Log level set to 20 00:26:31.741 INFO: Requests: 00:26:31.741 { 00:26:31.741 "jsonrpc": "2.0", 00:26:31.741 "method": "nvmf_set_config", 00:26:31.741 "id": 1, 00:26:31.741 "params": { 00:26:31.741 "admin_cmd_passthru": { 00:26:31.741 "identify_ctrlr": true 00:26:31.741 } 00:26:31.741 } 00:26:31.741 } 00:26:31.741 00:26:31.741 INFO: response: 00:26:31.741 { 00:26:31.741 "jsonrpc": "2.0", 00:26:31.741 "id": 1, 00:26:31.741 "result": true 00:26:31.741 } 00:26:31.741 00:26:31.741 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.741 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:31.742 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.742 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.742 INFO: Setting log level to 20 00:26:31.742 INFO: Setting log level to 20 00:26:31.742 INFO: Log level set to 20 00:26:31.742 INFO: Log level set to 20 00:26:31.742 INFO: Requests: 00:26:31.742 { 00:26:31.742 "jsonrpc": "2.0", 00:26:31.742 "method": "framework_start_init", 00:26:31.742 "id": 1 00:26:31.742 } 00:26:31.742 00:26:31.742 INFO: Requests: 00:26:31.742 { 00:26:31.742 "jsonrpc": "2.0", 00:26:31.742 "method": "framework_start_init", 00:26:31.742 "id": 1 00:26:31.742 } 00:26:31.742 00:26:31.998 [2024-07-15 11:52:39.824097] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:31.998 INFO: response: 00:26:31.998 { 00:26:31.998 "jsonrpc": "2.0", 00:26:31.998 "id": 1, 00:26:31.998 "result": true 00:26:31.998 } 00:26:31.998 00:26:31.998 INFO: response: 00:26:31.998 { 00:26:31.998 "jsonrpc": "2.0", 00:26:31.998 "id": 1, 00:26:31.998 "result": true 00:26:31.998 } 00:26:31.998 00:26:31.998 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.998 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:31.998 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.998 11:52:39 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.998 INFO: Setting log level to 40 00:26:31.998 INFO: Setting log level to 40 00:26:31.998 INFO: Setting log level to 40 00:26:31.998 [2024-07-15 11:52:39.834316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.998 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.998 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:31.998 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:31.998 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.998 11:52:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:26:31.998 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.999 11:52:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:35.273 Nvme0n1 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:35.273 [2024-07-15 11:52:42.733198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:35.273 [ 00:26:35.273 { 00:26:35.273 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:35.273 "subtype": "Discovery", 00:26:35.273 "listen_addresses": [], 00:26:35.273 "allow_any_host": true, 00:26:35.273 "hosts": [] 00:26:35.273 }, 00:26:35.273 { 00:26:35.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.273 "subtype": "NVMe", 00:26:35.273 "listen_addresses": [ 00:26:35.273 { 00:26:35.273 "trtype": "TCP", 00:26:35.273 "adrfam": "IPv4", 00:26:35.273 "traddr": "10.0.0.2", 00:26:35.273 "trsvcid": "4420" 00:26:35.273 } 00:26:35.273 ], 00:26:35.273 "allow_any_host": true, 00:26:35.273 "hosts": [], 00:26:35.273 "serial_number": 
"SPDK00000000000001", 00:26:35.273 "model_number": "SPDK bdev Controller", 00:26:35.273 "max_namespaces": 1, 00:26:35.273 "min_cntlid": 1, 00:26:35.273 "max_cntlid": 65519, 00:26:35.273 "namespaces": [ 00:26:35.273 { 00:26:35.273 "nsid": 1, 00:26:35.273 "bdev_name": "Nvme0n1", 00:26:35.273 "name": "Nvme0n1", 00:26:35.273 "nguid": "8E767D87E6E84BBBB32EB68C4BA30E8E", 00:26:35.273 "uuid": "8e767d87-e6e8-4bbb-b32e-b68c4ba30e8e" 00:26:35.273 } 00:26:35.273 ] 00:26:35.273 } 00:26:35.273 ] 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:35.273 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:35.273 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:35.273 11:52:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:35.273 11:52:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:35.273 11:52:42 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.273 11:52:42 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:35.273 11:52:42 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.273 11:52:42 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:35.273 11:52:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.273 11:52:42 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.273 rmmod nvme_tcp 00:26:35.273 rmmod nvme_fabrics 00:26:35.273 rmmod nvme_keyring 00:26:35.273 11:52:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.273 11:52:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:35.273 11:52:43 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:35.273 11:52:43 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3138935 ']' 00:26:35.273 11:52:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3138935 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3138935 ']' 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3138935 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3138935 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3138935' 00:26:35.273 killing process with pid 3138935 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3138935 00:26:35.273 11:52:43 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3138935 00:26:37.175 11:52:44 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:37.175 11:52:44 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:37.175 11:52:44 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:37.175 11:52:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.175 11:52:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:37.175 11:52:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.175 11:52:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:37.175 11:52:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.074 11:52:46 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:39.074 00:26:39.074 real 0m18.071s 00:26:39.074 user 0m26.427s 00:26:39.074 sys 0m2.323s 00:26:39.074 11:52:46 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:39.074 11:52:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:39.074 ************************************ 00:26:39.074 END TEST nvmf_identify_passthru 00:26:39.074 ************************************ 00:26:39.074 11:52:46 -- common/autotest_common.sh@1142 -- # return 0 00:26:39.074 11:52:46 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:39.074 11:52:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.074 11:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.074 11:52:46 -- common/autotest_common.sh@10 -- # set +x 00:26:39.074 ************************************ 00:26:39.074 START TEST nvmf_dif 00:26:39.074 ************************************ 00:26:39.074 11:52:46 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:39.074 * Looking for test storage... 
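Just above, before the nvmf_dif run begins, nvmftestfini unwinds the earlier setup: the kernel NVMe/TCP initiator modules are unloaded, the target process is reaped, and the namespace plumbing is flushed. A rough sketch of that teardown (the namespace removal itself is not traced in the log, so deleting it with ip netns is an assumed equivalent; the pid variable is a placeholder; requires root):

    NS=cvl_0_0_ns_spdk; INITIATOR_IF=cvl_0_1
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill -9 "$NVMF_TGT_PID" 2>/dev/null || true   # placeholder for the target pid
    ip netns delete "$NS" 2>/dev/null || true     # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush "$INITIATOR_IF"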
00:26:39.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:39.074 11:52:46 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.074 11:52:46 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.074 11:52:46 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.074 11:52:46 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.074 11:52:46 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.074 11:52:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.074 11:52:46 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.075 11:52:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.075 11:52:46 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:39.075 11:52:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.075 11:52:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:39.075 11:52:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:39.075 11:52:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:39.075 11:52:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:39.075 11:52:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.075 11:52:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:39.075 11:52:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.075 11:52:46 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.075 11:52:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.978 11:52:48 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:40.979 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:40.979 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:40.979 Found net devices under 0000:84:00.0: cvl_0_0 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:40.979 Found net devices under 0000:84:00.1: cvl_0_1 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.979 11:52:48 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.237 11:52:48 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:26:41.237 00:26:41.237 --- 10.0.0.2 ping statistics --- 00:26:41.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.237 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:26:41.237 00:26:41.237 --- 10.0.0.1 ping statistics --- 00:26:41.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.237 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:41.237 11:52:48 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:42.173 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:42.173 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:42.173 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:42.173 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:42.173 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:42.173 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:42.173 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:42.173 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:42.173 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:42.173 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:42.173 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:42.173 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:42.173 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:42.173 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:42.173 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:42.173 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:42.173 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:42.432 11:52:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:42.432 11:52:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3142214 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:42.432 11:52:50 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3142214 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3142214 ']' 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.432 11:52:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.432 [2024-07-15 11:52:50.376023] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:26:42.432 [2024-07-15 11:52:50.376108] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.432 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.690 [2024-07-15 11:52:50.442844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.690 [2024-07-15 11:52:50.546780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.690 [2024-07-15 11:52:50.546850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.690 [2024-07-15 11:52:50.546871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.690 [2024-07-15 11:52:50.546882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.690 [2024-07-15 11:52:50.546890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
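For reference, the nvmf_tcp_init and nvmfappstart steps traced above reduce to roughly the following shell sequence. This is a consolidated sketch, not part of the log output: the interface names (cvl_0_0 / cvl_0_1), the 10.0.0.x addresses, port 4420 and the nvmf_tgt flags are taken from this run; the relative nvmf_tgt path and running everything as root from the SPDK repository root are assumptions.

  # put the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side stays in the default namespace, target side gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic for port 4420 in on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # reachability check in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target inside the namespace; the harness then waits for the
  # RPC socket /var/tmp/spdk.sock to appear
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

Keeping the two NIC ports in separate network stacks is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, default namespace) over physical interfaces.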
00:26:42.690 [2024-07-15 11:52:50.546920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:42.690 11:52:50 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.690 11:52:50 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.690 11:52:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:42.690 11:52:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.690 [2024-07-15 11:52:50.672130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.690 11:52:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.690 11:52:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 ************************************ 00:26:42.948 START TEST fio_dif_1_default 00:26:42.948 ************************************ 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 bdev_null0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 [2024-07-15 11:52:50.728365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.948 { 00:26:42.948 "params": { 00:26:42.948 "name": "Nvme$subsystem", 00:26:42.948 "trtype": "$TEST_TRANSPORT", 00:26:42.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.948 "adrfam": "ipv4", 00:26:42.948 "trsvcid": "$NVMF_PORT", 00:26:42.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.948 "hdgst": ${hdgst:-false}, 00:26:42.948 "ddgst": ${ddgst:-false} 00:26:42.948 }, 00:26:42.948 "method": "bdev_nvme_attach_controller" 00:26:42.948 } 00:26:42.948 EOF 00:26:42.948 )") 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:42.948 11:52:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:42.948 "params": { 00:26:42.948 "name": "Nvme0", 00:26:42.948 "trtype": "tcp", 00:26:42.948 "traddr": "10.0.0.2", 00:26:42.948 "adrfam": "ipv4", 00:26:42.948 "trsvcid": "4420", 00:26:42.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.949 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:42.949 "hdgst": false, 00:26:42.949 "ddgst": false 00:26:42.949 }, 00:26:42.949 "method": "bdev_nvme_attach_controller" 00:26:42.949 }' 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:42.949 11:52:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:43.206 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:43.206 fio-3.35 00:26:43.206 Starting 1 thread 00:26:43.206 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.420 00:26:55.420 filename0: (groupid=0, jobs=1): err= 0: pid=3142441: Mon Jul 15 11:53:01 2024 00:26:55.420 read: IOPS=188, BW=753KiB/s (772kB/s)(7552KiB/10023msec) 00:26:55.420 slat (nsec): min=7119, max=63980, avg=9196.76, stdev=3747.45 00:26:55.420 clat (usec): min=555, max=46371, avg=21205.06, stdev=20434.44 00:26:55.420 lat (usec): min=562, max=46400, avg=21214.26, stdev=20434.69 00:26:55.420 clat percentiles (usec): 00:26:55.420 | 1.00th=[ 586], 5.00th=[ 603], 10.00th=[ 635], 20.00th=[ 701], 00:26:55.420 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[41157], 60.00th=[41157], 00:26:55.420 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:26:55.420 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:26:55.420 | 99.99th=[46400] 00:26:55.420 bw ( KiB/s): min= 672, max= 768, per=99.94%, avg=753.60, stdev=30.22, samples=20 00:26:55.420 iops : min= 168, max= 192, 
avg=188.40, stdev= 7.56, samples=20 00:26:55.420 lat (usec) : 750=45.39%, 1000=4.40% 00:26:55.420 lat (msec) : 50=50.21% 00:26:55.420 cpu : usr=89.87%, sys=9.82%, ctx=26, majf=0, minf=260 00:26:55.420 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.420 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.420 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:55.420 00:26:55.420 Run status group 0 (all jobs): 00:26:55.420 READ: bw=753KiB/s (772kB/s), 753KiB/s-753KiB/s (772kB/s-772kB/s), io=7552KiB (7733kB), run=10023-10023msec 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 00:26:55.420 real 0m11.106s 00:26:55.420 user 0m10.128s 00:26:55.420 sys 0m1.256s 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 ************************************ 00:26:55.420 END TEST fio_dif_1_default 00:26:55.420 ************************************ 00:26:55.420 11:53:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:55.420 11:53:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:55.420 11:53:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:55.420 11:53:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 ************************************ 00:26:55.420 START TEST fio_dif_1_multi_subsystems 00:26:55.420 ************************************ 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:55.420 11:53:01 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 bdev_null0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 [2024-07-15 11:53:01.889127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 bdev_null1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:55.420 { 00:26:55.420 "params": { 00:26:55.420 "name": "Nvme$subsystem", 00:26:55.420 "trtype": "$TEST_TRANSPORT", 00:26:55.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.420 "adrfam": "ipv4", 00:26:55.420 "trsvcid": "$NVMF_PORT", 00:26:55.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.420 "hdgst": ${hdgst:-false}, 00:26:55.420 "ddgst": ${ddgst:-false} 00:26:55.420 }, 00:26:55.420 "method": "bdev_nvme_attach_controller" 00:26:55.420 } 00:26:55.420 EOF 00:26:55.420 )") 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:55.420 { 00:26:55.420 "params": { 00:26:55.420 "name": "Nvme$subsystem", 00:26:55.420 "trtype": "$TEST_TRANSPORT", 00:26:55.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.420 "adrfam": "ipv4", 00:26:55.420 "trsvcid": "$NVMF_PORT", 00:26:55.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.420 "hdgst": ${hdgst:-false}, 00:26:55.420 "ddgst": ${ddgst:-false} 00:26:55.420 }, 00:26:55.420 "method": "bdev_nvme_attach_controller" 00:26:55.420 } 00:26:55.420 EOF 00:26:55.420 )") 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:55.420 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:55.420 "params": { 00:26:55.420 "name": "Nvme0", 00:26:55.421 "trtype": "tcp", 00:26:55.421 "traddr": "10.0.0.2", 00:26:55.421 "adrfam": "ipv4", 00:26:55.421 "trsvcid": "4420", 00:26:55.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:55.421 "hdgst": false, 00:26:55.421 "ddgst": false 00:26:55.421 }, 00:26:55.421 "method": "bdev_nvme_attach_controller" 00:26:55.421 },{ 00:26:55.421 "params": { 00:26:55.421 "name": "Nvme1", 00:26:55.421 "trtype": "tcp", 00:26:55.421 "traddr": "10.0.0.2", 00:26:55.421 "adrfam": "ipv4", 00:26:55.421 "trsvcid": "4420", 00:26:55.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:55.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:55.421 "hdgst": false, 00:26:55.421 "ddgst": false 00:26:55.421 }, 00:26:55.421 "method": "bdev_nvme_attach_controller" 00:26:55.421 }' 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:55.421 11:53:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.421 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:55.421 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:55.421 fio-3.35 00:26:55.421 Starting 2 threads 00:26:55.421 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.382 00:27:05.382 filename0: (groupid=0, jobs=1): err= 0: pid=3143849: Mon Jul 15 11:53:12 2024 00:27:05.382 read: IOPS=187, BW=751KiB/s (770kB/s)(7520KiB/10007msec) 00:27:05.382 slat (nsec): min=7010, max=47129, avg=9505.08, stdev=2991.93 00:27:05.382 clat (usec): min=524, max=43453, avg=21261.11, stdev=20522.94 00:27:05.382 lat (usec): min=532, max=43485, avg=21270.61, stdev=20522.84 00:27:05.382 clat percentiles (usec): 00:27:05.382 | 1.00th=[ 570], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 627], 00:27:05.382 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[41157], 60.00th=[41157], 00:27:05.382 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:27:05.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:27:05.382 | 99.99th=[43254] 
00:27:05.382 bw ( KiB/s): min= 672, max= 768, per=50.22%, avg=750.40, stdev=31.96, samples=20 00:27:05.382 iops : min= 168, max= 192, avg=187.60, stdev= 7.99, samples=20 00:27:05.382 lat (usec) : 750=46.22%, 1000=3.09% 00:27:05.382 lat (msec) : 2=0.48%, 50=50.21% 00:27:05.382 cpu : usr=94.30%, sys=5.35%, ctx=32, majf=0, minf=57 00:27:05.382 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.382 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.382 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:05.382 filename1: (groupid=0, jobs=1): err= 0: pid=3143850: Mon Jul 15 11:53:12 2024 00:27:05.382 read: IOPS=185, BW=742KiB/s (760kB/s)(7424KiB/10002msec) 00:27:05.382 slat (nsec): min=6447, max=72210, avg=9103.01, stdev=2991.68 00:27:05.382 clat (usec): min=509, max=42447, avg=21526.42, stdev=20665.56 00:27:05.382 lat (usec): min=516, max=42459, avg=21535.52, stdev=20665.40 00:27:05.382 clat percentiles (usec): 00:27:05.382 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 578], 00:27:05.382 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[41157], 60.00th=[41157], 00:27:05.382 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:05.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:05.382 | 99.99th=[42206] 00:27:05.382 bw ( KiB/s): min= 608, max= 768, per=49.69%, avg=742.74, stdev=42.10, samples=19 00:27:05.382 iops : min= 152, max= 192, avg=185.68, stdev=10.53, samples=19 00:27:05.382 lat (usec) : 750=46.61%, 1000=2.75% 00:27:05.382 lat (msec) : 50=50.65% 00:27:05.382 cpu : usr=94.08%, sys=5.52%, ctx=27, majf=0, minf=204 00:27:05.382 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.382 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.382 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:05.382 00:27:05.382 Run status group 0 (all jobs): 00:27:05.382 READ: bw=1493KiB/s (1529kB/s), 742KiB/s-751KiB/s (760kB/s-770kB/s), io=14.6MiB (15.3MB), run=10002-10007msec 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 00:27:05.382 real 0m11.328s 00:27:05.382 user 0m20.164s 00:27:05.382 sys 0m1.351s 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 ************************************ 00:27:05.382 END TEST fio_dif_1_multi_subsystems 00:27:05.382 ************************************ 00:27:05.382 11:53:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:05.382 11:53:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:05.382 11:53:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:05.382 11:53:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 ************************************ 00:27:05.382 START TEST fio_dif_rand_params 00:27:05.382 ************************************ 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.382 11:53:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 bdev_null0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.382 [2024-07-15 11:53:13.268353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 
00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:05.382 { 00:27:05.382 "params": { 00:27:05.382 "name": "Nvme$subsystem", 00:27:05.382 "trtype": "$TEST_TRANSPORT", 00:27:05.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.382 "adrfam": "ipv4", 00:27:05.382 "trsvcid": "$NVMF_PORT", 00:27:05.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.382 "hdgst": ${hdgst:-false}, 00:27:05.382 "ddgst": ${ddgst:-false} 00:27:05.382 }, 00:27:05.382 "method": "bdev_nvme_attach_controller" 00:27:05.382 } 00:27:05.382 EOF 00:27:05.382 )") 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:05.382 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:05.383 "params": { 00:27:05.383 "name": "Nvme0", 00:27:05.383 "trtype": "tcp", 00:27:05.383 "traddr": "10.0.0.2", 00:27:05.383 "adrfam": "ipv4", 00:27:05.383 "trsvcid": "4420", 00:27:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.383 "hdgst": false, 00:27:05.383 "ddgst": false 00:27:05.383 }, 00:27:05.383 "method": "bdev_nvme_attach_controller" 00:27:05.383 }' 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:05.383 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.642 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:05.642 ... 
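The job banner just above ("filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, ..., ioengine=spdk_bdev, iodepth=3") matches the parameters fio_dif_rand_params picked earlier (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5). As a rough standalone equivalent of what dif.sh pipes to fio over /dev/fd/62 and /dev/fd/61 — the job-file keys, the Nvme0n1 filename and the file paths below are assumptions, not copied from the script:

  # bdev.json: the bdev_nvme_attach_controller configuration printed just above
  # job.fio:   an assumed job file matching the traced parameters
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

With the spdk_bdev engine, fio's "files" are SPDK bdevs: the preloaded plugin reads bdev.json, attaches to the target over NVMe/TCP, and issues the random reads against the resulting Nvme0 controller rather than against a kernel block device.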
00:27:05.642 fio-3.35 00:27:05.642 Starting 3 threads 00:27:05.642 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.194 00:27:12.194 filename0: (groupid=0, jobs=1): err= 0: pid=3145250: Mon Jul 15 11:53:19 2024 00:27:12.194 read: IOPS=255, BW=32.0MiB/s (33.6MB/s)(161MiB/5043msec) 00:27:12.194 slat (nsec): min=8098, max=51420, avg=16526.69, stdev=5151.16 00:27:12.194 clat (usec): min=4366, max=56057, avg=11665.91, stdev=5075.30 00:27:12.194 lat (usec): min=4382, max=56078, avg=11682.43, stdev=5075.02 00:27:12.194 clat percentiles (usec): 00:27:12.194 | 1.00th=[ 5211], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 9634], 00:27:12.194 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:27:12.194 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13698], 95.00th=[14484], 00:27:12.194 | 99.00th=[50594], 99.50th=[52167], 99.90th=[55837], 99.95th=[55837], 00:27:12.194 | 99.99th=[55837] 00:27:12.194 bw ( KiB/s): min=29498, max=35584, per=33.99%, avg=32978.60, stdev=1784.66, samples=10 00:27:12.194 iops : min= 230, max= 278, avg=257.60, stdev=14.04, samples=10 00:27:12.194 lat (msec) : 10=24.48%, 20=74.21%, 50=0.23%, 100=1.08% 00:27:12.194 cpu : usr=84.25%, sys=12.24%, ctx=639, majf=0, minf=84 00:27:12.194 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.194 issued rwts: total=1291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.194 filename0: (groupid=0, jobs=1): err= 0: pid=3145251: Mon Jul 15 11:53:19 2024 00:27:12.194 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5004msec) 00:27:12.194 slat (nsec): min=6241, max=61574, avg=16903.94, stdev=5028.04 00:27:12.194 clat (usec): min=4657, max=52071, avg=11855.27, stdev=3951.21 00:27:12.194 lat (usec): min=4671, max=52086, avg=11872.17, stdev=3951.46 00:27:12.194 clat percentiles (usec): 00:27:12.194 | 1.00th=[ 6521], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 9896], 00:27:12.194 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:27:12.194 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14484], 95.00th=[15139], 00:27:12.194 | 99.00th=[16909], 99.50th=[50070], 99.90th=[51119], 99.95th=[52167], 00:27:12.194 | 99.99th=[52167] 00:27:12.194 bw ( KiB/s): min=28928, max=38912, per=33.27%, avg=32281.60, stdev=2763.67, samples=10 00:27:12.194 iops : min= 226, max= 304, avg=252.20, stdev=21.59, samples=10 00:27:12.194 lat (msec) : 10=21.99%, 20=77.29%, 50=0.32%, 100=0.40% 00:27:12.194 cpu : usr=84.09%, sys=12.23%, ctx=372, majf=0, minf=108 00:27:12.194 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.194 issued rwts: total=1264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.194 filename0: (groupid=0, jobs=1): err= 0: pid=3145252: Mon Jul 15 11:53:19 2024 00:27:12.194 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(159MiB/5047msec) 00:27:12.194 slat (nsec): min=6013, max=29126, avg=13556.49, stdev=1897.44 00:27:12.194 clat (usec): min=5294, max=53430, avg=11858.81, stdev=6233.22 00:27:12.194 lat (usec): min=5304, max=53443, avg=11872.36, stdev=6233.11 00:27:12.194 clat percentiles (usec): 
00:27:12.194 | 1.00th=[ 6456], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9634], 00:27:12.194 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11338], 00:27:12.194 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13435], 95.00th=[14615], 00:27:12.194 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:27:12.194 | 99.99th=[53216] 00:27:12.194 bw ( KiB/s): min=27392, max=35584, per=33.48%, avg=32486.40, stdev=2884.85, samples=10 00:27:12.194 iops : min= 214, max= 278, avg=253.80, stdev=22.54, samples=10 00:27:12.194 lat (msec) : 10=28.09%, 20=69.63%, 50=0.39%, 100=1.89% 00:27:12.194 cpu : usr=92.39%, sys=7.09%, ctx=14, majf=0, minf=89 00:27:12.194 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.194 issued rwts: total=1271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.194 00:27:12.194 Run status group 0 (all jobs): 00:27:12.194 READ: bw=94.8MiB/s (99.4MB/s), 31.5MiB/s-32.0MiB/s (33.0MB/s-33.6MB/s), io=478MiB (501MB), run=5004-5047msec 00:27:12.194 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:12.194 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:12.194 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:12.194 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
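The create_subsystems loop traced next issues its SPDK JSON-RPCs one at a time through rpc_cmd. Collected into a standalone sequence for subsystem 0 of this NULL_DIF=2 case (the scripts/rpc.py path is an assumption; rpc_cmd forwards the same method names and arguments to the target's /var/tmp/spdk.sock):

  RPC=./scripts/rpc.py
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  # expose it through an NVMe-oF subsystem listening on 10.0.0.2:4420 over TCP
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystems 1 and 2 repeat the same four calls with bdev_null1/bdev_null2 and cnode1/cnode2. The transport itself was created once, earlier in the run, with 'nvmf_create_transport -t tcp -o --dif-insert-or-strip', which is what tells the target to insert and strip the protection information itself.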
00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 bdev_null0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 [2024-07-15 11:53:19.630248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 bdev_null1 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 bdev_null2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:27:12.195 { 00:27:12.195 "params": { 00:27:12.195 "name": "Nvme$subsystem", 00:27:12.195 "trtype": "$TEST_TRANSPORT", 00:27:12.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.195 "adrfam": "ipv4", 00:27:12.195 "trsvcid": "$NVMF_PORT", 00:27:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.195 "hdgst": ${hdgst:-false}, 00:27:12.195 "ddgst": ${ddgst:-false} 00:27:12.195 }, 00:27:12.195 "method": "bdev_nvme_attach_controller" 00:27:12.195 } 00:27:12.195 EOF 00:27:12.195 )") 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.195 { 00:27:12.195 "params": { 00:27:12.195 "name": "Nvme$subsystem", 00:27:12.195 "trtype": "$TEST_TRANSPORT", 00:27:12.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.195 "adrfam": "ipv4", 00:27:12.195 "trsvcid": "$NVMF_PORT", 00:27:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.195 "hdgst": ${hdgst:-false}, 00:27:12.195 "ddgst": ${ddgst:-false} 00:27:12.195 }, 00:27:12.195 "method": "bdev_nvme_attach_controller" 00:27:12.195 } 00:27:12.195 EOF 00:27:12.195 )") 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.195 { 00:27:12.195 "params": { 00:27:12.195 "name": "Nvme$subsystem", 00:27:12.195 "trtype": "$TEST_TRANSPORT", 00:27:12.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.195 "adrfam": "ipv4", 00:27:12.195 "trsvcid": "$NVMF_PORT", 00:27:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.195 "hdgst": ${hdgst:-false}, 00:27:12.195 "ddgst": ${ddgst:-false} 00:27:12.195 }, 00:27:12.195 "method": "bdev_nvme_attach_controller" 00:27:12.195 } 00:27:12.195 EOF 00:27:12.195 )") 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:12.195 11:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:12.195 "params": { 00:27:12.195 "name": "Nvme0", 00:27:12.195 "trtype": "tcp", 00:27:12.195 "traddr": "10.0.0.2", 00:27:12.195 "adrfam": "ipv4", 00:27:12.195 "trsvcid": "4420", 00:27:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:12.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:12.195 "hdgst": false, 00:27:12.195 "ddgst": false 00:27:12.195 }, 00:27:12.195 "method": "bdev_nvme_attach_controller" 00:27:12.195 },{ 00:27:12.195 "params": { 00:27:12.195 "name": "Nvme1", 00:27:12.195 "trtype": "tcp", 00:27:12.195 "traddr": "10.0.0.2", 00:27:12.195 "adrfam": "ipv4", 00:27:12.195 "trsvcid": "4420", 00:27:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.195 "hdgst": false, 00:27:12.195 "ddgst": false 00:27:12.195 }, 00:27:12.195 "method": "bdev_nvme_attach_controller" 00:27:12.195 },{ 00:27:12.196 "params": { 00:27:12.196 "name": "Nvme2", 00:27:12.196 "trtype": "tcp", 00:27:12.196 "traddr": "10.0.0.2", 00:27:12.196 "adrfam": "ipv4", 00:27:12.196 "trsvcid": "4420", 00:27:12.196 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:12.196 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:12.196 "hdgst": false, 00:27:12.196 "ddgst": false 00:27:12.196 }, 00:27:12.196 "method": "bdev_nvme_attach_controller" 00:27:12.196 }' 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:12.196 11:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.196 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:12.196 ... 00:27:12.196 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:12.196 ... 00:27:12.196 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:12.196 ... 00:27:12.196 fio-3.35 00:27:12.196 Starting 24 threads 00:27:12.196 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.471 00:27:24.471 filename0: (groupid=0, jobs=1): err= 0: pid=3146114: Mon Jul 15 11:53:31 2024 00:27:24.471 read: IOPS=60, BW=242KiB/s (248kB/s)(2456KiB/10145msec) 00:27:24.471 slat (nsec): min=4865, max=50265, avg=11888.79, stdev=5071.21 00:27:24.471 clat (msec): min=165, max=513, avg=263.43, stdev=52.46 00:27:24.471 lat (msec): min=165, max=513, avg=263.44, stdev=52.46 00:27:24.471 clat percentiles (msec): 00:27:24.471 | 1.00th=[ 165], 5.00th=[ 213], 10.00th=[ 228], 20.00th=[ 247], 00:27:24.471 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 262], 00:27:24.471 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 397], 00:27:24.471 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:27:24.471 | 99.99th=[ 514] 00:27:24.471 bw ( KiB/s): min= 144, max= 384, per=4.47%, avg=251.79, stdev=42.28, samples=19 00:27:24.471 iops : min= 36, max= 96, avg=62.95, stdev=10.57, samples=19 00:27:24.471 lat (msec) : 250=30.29%, 500=67.10%, 750=2.61% 00:27:24.471 cpu : usr=98.42%, sys=1.14%, ctx=17, majf=0, minf=58 00:27:24.471 IO depths : 1=1.6%, 2=3.7%, 4=12.5%, 8=71.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:27:24.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.471 complete : 0=0.0%, 4=90.5%, 8=4.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.471 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.471 filename0: (groupid=0, jobs=1): err= 0: pid=3146115: Mon Jul 15 11:53:31 2024 00:27:24.471 read: IOPS=61, BW=245KiB/s (251kB/s)(2488KiB/10150msec) 00:27:24.471 slat (usec): min=4, max=103, avg=15.38, stdev= 9.59 00:27:24.471 clat (msec): min=165, max=573, avg=260.66, stdev=50.29 00:27:24.471 lat (msec): min=165, max=573, avg=260.67, stdev=50.29 00:27:24.471 clat percentiles (msec): 00:27:24.471 | 1.00th=[ 165], 5.00th=[ 220], 10.00th=[ 241], 20.00th=[ 247], 00:27:24.471 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:27:24.471 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 266], 95.00th=[ 313], 00:27:24.471 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 575], 99.95th=[ 575], 00:27:24.471 | 99.99th=[ 575] 00:27:24.471 bw ( KiB/s): min= 144, max= 368, per=4.54%, avg=255.16, stdev=37.51, samples=19 00:27:24.471 iops : min= 36, max= 92, avg=63.79, stdev= 9.38, samples=19 00:27:24.471 lat (msec) : 250=33.12%, 500=64.31%, 750=2.57% 00:27:24.471 cpu : usr=97.65%, sys=1.61%, ctx=131, majf=0, minf=68 00:27:24.471 
IO depths : 1=0.2%, 2=6.4%, 4=25.1%, 8=56.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:24.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.471 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.471 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.471 filename0: (groupid=0, jobs=1): err= 0: pid=3146116: Mon Jul 15 11:53:31 2024 00:27:24.471 read: IOPS=55, BW=223KiB/s (229kB/s)(2264KiB/10143msec) 00:27:24.471 slat (nsec): min=7622, max=96283, avg=20582.45, stdev=21183.64 00:27:24.471 clat (msec): min=195, max=591, avg=286.04, stdev=70.99 00:27:24.471 lat (msec): min=195, max=591, avg=286.06, stdev=71.00 00:27:24.471 clat percentiles (msec): 00:27:24.471 | 1.00th=[ 197], 5.00th=[ 230], 10.00th=[ 247], 20.00th=[ 251], 00:27:24.471 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 264], 00:27:24.471 | 70.00th=[ 266], 80.00th=[ 309], 90.00th=[ 388], 95.00th=[ 405], 00:27:24.471 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:27:24.471 | 99.99th=[ 592] 00:27:24.471 bw ( KiB/s): min= 128, max= 256, per=4.11%, avg=231.58, stdev=43.78, samples=19 00:27:24.471 iops : min= 32, max= 64, avg=57.89, stdev=10.94, samples=19 00:27:24.471 lat (msec) : 250=18.37%, 500=78.80%, 750=2.83% 00:27:24.471 cpu : usr=98.21%, sys=1.19%, ctx=21, majf=0, minf=57 00:27:24.471 IO depths : 1=1.8%, 2=4.6%, 4=14.7%, 8=68.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=91.1%, 8=3.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename0: (groupid=0, jobs=1): err= 0: pid=3146117: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=68, BW=273KiB/s (279kB/s)(2776KiB/10176msec) 00:27:24.472 slat (nsec): min=6248, max=59106, avg=13073.84, stdev=6425.93 00:27:24.472 clat (msec): min=16, max=303, avg=233.06, stdev=60.87 00:27:24.472 lat (msec): min=16, max=303, avg=233.07, stdev=60.87 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 17], 5.00th=[ 68], 10.00th=[ 130], 20.00th=[ 241], 00:27:24.472 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 257], 00:27:24.472 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 266], 95.00th=[ 266], 00:27:24.472 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:27:24.472 | 99.99th=[ 305] 00:27:24.472 bw ( KiB/s): min= 176, max= 640, per=4.82%, avg=271.20, stdev=88.63, samples=20 00:27:24.472 iops : min= 44, max= 160, avg=67.80, stdev=22.16, samples=20 00:27:24.472 lat (msec) : 20=2.31%, 50=2.31%, 100=2.31%, 250=34.01%, 500=59.08% 00:27:24.472 cpu : usr=97.50%, sys=1.79%, ctx=67, majf=0, minf=40 00:27:24.472 IO depths : 1=0.4%, 2=1.6%, 4=9.7%, 8=76.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=89.7%, 8=4.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename0: (groupid=0, jobs=1): err= 0: pid=3146118: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=42, BW=170KiB/s (174kB/s)(1728KiB/10143msec) 00:27:24.472 slat (nsec): min=5164, max=51580, avg=13819.31, stdev=7493.69 00:27:24.472 clat 
(msec): min=246, max=593, avg=375.51, stdev=73.78 00:27:24.472 lat (msec): min=246, max=593, avg=375.52, stdev=73.78 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 259], 5.00th=[ 262], 10.00th=[ 271], 20.00th=[ 334], 00:27:24.472 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 405], 00:27:24.472 | 70.00th=[ 414], 80.00th=[ 435], 90.00th=[ 443], 95.00th=[ 527], 00:27:24.472 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:27:24.472 | 99.99th=[ 592] 00:27:24.472 bw ( KiB/s): min= 128, max= 256, per=3.11%, avg=175.16, stdev=60.22, samples=19 00:27:24.472 iops : min= 32, max= 64, avg=43.79, stdev=15.05, samples=19 00:27:24.472 lat (msec) : 250=0.46%, 500=93.98%, 750=5.56% 00:27:24.472 cpu : usr=97.96%, sys=1.44%, ctx=35, majf=0, minf=54 00:27:24.472 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename0: (groupid=0, jobs=1): err= 0: pid=3146119: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=63, BW=252KiB/s (258kB/s)(2568KiB/10173msec) 00:27:24.472 slat (nsec): min=6070, max=86712, avg=11671.52, stdev=7117.51 00:27:24.472 clat (msec): min=48, max=401, avg=251.92, stdev=58.70 00:27:24.472 lat (msec): min=48, max=401, avg=251.93, stdev=58.70 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 49], 5.00th=[ 163], 10.00th=[ 190], 20.00th=[ 232], 00:27:24.472 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:27:24.472 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 309], 95.00th=[ 372], 00:27:24.472 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:27:24.472 | 99.99th=[ 401] 00:27:24.472 bw ( KiB/s): min= 176, max= 384, per=4.45%, avg=250.40, stdev=37.89, samples=20 00:27:24.472 iops : min= 44, max= 96, avg=62.60, stdev= 9.47, samples=20 00:27:24.472 lat (msec) : 50=2.49%, 100=2.49%, 250=37.38%, 500=57.63% 00:27:24.472 cpu : usr=97.87%, sys=1.56%, ctx=45, majf=0, minf=56 00:27:24.472 IO depths : 1=0.6%, 2=1.4%, 4=7.9%, 8=77.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=89.1%, 8=6.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename0: (groupid=0, jobs=1): err= 0: pid=3146120: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=59, BW=239KiB/s (244kB/s)(2424KiB/10158msec) 00:27:24.472 slat (usec): min=4, max=131, avg=19.33, stdev=18.23 00:27:24.472 clat (msec): min=184, max=460, avg=267.76, stdev=34.76 00:27:24.472 lat (msec): min=184, max=460, avg=267.78, stdev=34.77 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 186], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 251], 00:27:24.472 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 262], 60.00th=[ 264], 00:27:24.472 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 330], 95.00th=[ 359], 00:27:24.472 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 460], 99.95th=[ 460], 00:27:24.472 | 99.99th=[ 460] 00:27:24.472 bw ( KiB/s): min= 128, max= 256, per=4.18%, avg=236.00, stdev=42.45, samples=20 00:27:24.472 iops : min= 32, max= 64, avg=59.00, stdev=10.61, samples=20 
00:27:24.472 lat (msec) : 250=21.78%, 500=78.22% 00:27:24.472 cpu : usr=98.31%, sys=1.14%, ctx=36, majf=0, minf=78 00:27:24.472 IO depths : 1=1.7%, 2=7.9%, 4=25.1%, 8=54.6%, 16=10.7%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename0: (groupid=0, jobs=1): err= 0: pid=3146121: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=58, BW=232KiB/s (238kB/s)(2360KiB/10157msec) 00:27:24.472 slat (usec): min=8, max=106, avg=21.19, stdev=21.82 00:27:24.472 clat (msec): min=184, max=501, avg=275.12, stdev=46.79 00:27:24.472 lat (msec): min=184, max=501, avg=275.14, stdev=46.81 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 186], 5.00th=[ 220], 10.00th=[ 245], 20.00th=[ 253], 00:27:24.472 | 30.00th=[ 257], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 266], 00:27:24.472 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 342], 95.00th=[ 401], 00:27:24.472 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 502], 99.95th=[ 502], 00:27:24.472 | 99.99th=[ 502] 00:27:24.472 bw ( KiB/s): min= 128, max= 256, per=4.07%, avg=229.60, stdev=50.40, samples=20 00:27:24.472 iops : min= 32, max= 64, avg=57.40, stdev=12.60, samples=20 00:27:24.472 lat (msec) : 250=16.27%, 500=83.39%, 750=0.34% 00:27:24.472 cpu : usr=98.34%, sys=1.24%, ctx=20, majf=0, minf=55 00:27:24.472 IO depths : 1=5.8%, 2=12.0%, 4=25.1%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename1: (groupid=0, jobs=1): err= 0: pid=3146122: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10157msec) 00:27:24.472 slat (nsec): min=7540, max=76706, avg=14419.13, stdev=9540.17 00:27:24.472 clat (msec): min=196, max=409, avg=266.38, stdev=39.58 00:27:24.472 lat (msec): min=196, max=409, avg=266.40, stdev=39.59 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 199], 5.00th=[ 205], 10.00th=[ 226], 20.00th=[ 247], 00:27:24.472 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 264], 00:27:24.472 | 70.00th=[ 266], 80.00th=[ 279], 90.00th=[ 321], 95.00th=[ 355], 00:27:24.472 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:27:24.472 | 99.99th=[ 409] 00:27:24.472 bw ( KiB/s): min= 128, max= 256, per=4.20%, avg=236.80, stdev=32.25, samples=20 00:27:24.472 iops : min= 32, max= 64, avg=59.20, stdev= 8.06, samples=20 00:27:24.472 lat (msec) : 250=26.97%, 500=73.03% 00:27:24.472 cpu : usr=98.40%, sys=1.16%, ctx=16, majf=0, minf=53 00:27:24.472 IO depths : 1=0.7%, 2=2.1%, 4=10.2%, 8=74.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=89.8%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename1: (groupid=0, jobs=1): err= 0: pid=3146123: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=60, BW=242KiB/s (248kB/s)(2456KiB/10148msec) 
00:27:24.472 slat (nsec): min=4203, max=38827, avg=10610.76, stdev=3625.75 00:27:24.472 clat (msec): min=165, max=516, avg=263.27, stdev=54.36 00:27:24.472 lat (msec): min=165, max=516, avg=263.28, stdev=54.36 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 165], 5.00th=[ 222], 10.00th=[ 239], 20.00th=[ 247], 00:27:24.472 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:27:24.472 | 70.00th=[ 264], 80.00th=[ 264], 90.00th=[ 266], 95.00th=[ 435], 00:27:24.472 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518], 00:27:24.472 | 99.99th=[ 518] 00:27:24.472 bw ( KiB/s): min= 176, max= 256, per=4.47%, avg=251.79, stdev=18.35, samples=19 00:27:24.472 iops : min= 44, max= 64, avg=62.95, stdev= 4.59, samples=19 00:27:24.472 lat (msec) : 250=31.92%, 500=65.47%, 750=2.61% 00:27:24.472 cpu : usr=98.25%, sys=1.33%, ctx=17, majf=0, minf=52 00:27:24.472 IO depths : 1=0.5%, 2=1.8%, 4=10.1%, 8=75.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=89.8%, 8=4.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.472 filename1: (groupid=0, jobs=1): err= 0: pid=3146124: Mon Jul 15 11:53:31 2024 00:27:24.472 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10158msec) 00:27:24.472 slat (nsec): min=8223, max=71665, avg=16006.02, stdev=10259.49 00:27:24.472 clat (msec): min=184, max=359, avg=260.29, stdev=27.05 00:27:24.472 lat (msec): min=184, max=359, avg=260.31, stdev=27.06 00:27:24.472 clat percentiles (msec): 00:27:24.472 | 1.00th=[ 184], 5.00th=[ 220], 10.00th=[ 243], 20.00th=[ 247], 00:27:24.472 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 264], 00:27:24.472 | 70.00th=[ 266], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 338], 00:27:24.472 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:27:24.472 | 99.99th=[ 359] 00:27:24.472 bw ( KiB/s): min= 128, max= 256, per=4.32%, avg=243.20, stdev=39.40, samples=20 00:27:24.472 iops : min= 32, max= 64, avg=60.80, stdev= 9.85, samples=20 00:27:24.472 lat (msec) : 250=28.21%, 500=71.79% 00:27:24.472 cpu : usr=98.29%, sys=1.26%, ctx=26, majf=0, minf=47 00:27:24.472 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:24.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.472 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename1: (groupid=0, jobs=1): err= 0: pid=3146125: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=67, BW=270KiB/s (277kB/s)(2752KiB/10179msec) 00:27:24.473 slat (usec): min=3, max=204, avg=19.47, stdev=22.93 00:27:24.473 clat (msec): min=17, max=346, avg=236.10, stdev=62.14 00:27:24.473 lat (msec): min=17, max=346, avg=236.12, stdev=62.14 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 18], 5.00th=[ 68], 10.00th=[ 129], 20.00th=[ 245], 00:27:24.473 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 259], 00:27:24.473 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 266], 95.00th=[ 268], 00:27:24.473 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 347], 99.95th=[ 347], 00:27:24.473 | 99.99th=[ 347] 00:27:24.473 bw ( KiB/s): min= 144, max= 640, per=4.77%, avg=268.80, stdev=90.90, samples=20 
00:27:24.473 iops : min= 36, max= 160, avg=67.20, stdev=22.72, samples=20 00:27:24.473 lat (msec) : 20=2.33%, 50=2.03%, 100=2.62%, 250=29.51%, 500=63.52% 00:27:24.473 cpu : usr=98.11%, sys=1.44%, ctx=17, majf=0, minf=152 00:27:24.473 IO depths : 1=0.3%, 2=6.4%, 4=24.4%, 8=56.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename1: (groupid=0, jobs=1): err= 0: pid=3146126: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=67, BW=270KiB/s (277kB/s)(2752KiB/10174msec) 00:27:24.473 slat (nsec): min=4165, max=43024, avg=11278.45, stdev=4705.65 00:27:24.473 clat (msec): min=12, max=328, avg=236.04, stdev=58.54 00:27:24.473 lat (msec): min=12, max=328, avg=236.05, stdev=58.54 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 13], 5.00th=[ 68], 10.00th=[ 169], 20.00th=[ 243], 00:27:24.473 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 259], 00:27:24.473 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 266], 95.00th=[ 266], 00:27:24.473 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 330], 99.95th=[ 330], 00:27:24.473 | 99.99th=[ 330] 00:27:24.473 bw ( KiB/s): min= 144, max= 513, per=4.77%, avg=268.85, stdev=69.56, samples=20 00:27:24.473 iops : min= 36, max= 128, avg=67.20, stdev=17.34, samples=20 00:27:24.473 lat (msec) : 20=2.62%, 50=2.03%, 100=2.33%, 250=30.52%, 500=62.50% 00:27:24.473 cpu : usr=98.27%, sys=1.26%, ctx=11, majf=0, minf=47 00:27:24.473 IO depths : 1=1.6%, 2=7.7%, 4=24.6%, 8=55.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename1: (groupid=0, jobs=1): err= 0: pid=3146127: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10154msec) 00:27:24.473 slat (nsec): min=4344, max=48267, avg=13715.00, stdev=7502.19 00:27:24.473 clat (msec): min=165, max=585, avg=260.18, stdev=50.76 00:27:24.473 lat (msec): min=165, max=585, avg=260.19, stdev=50.76 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 165], 5.00th=[ 205], 10.00th=[ 228], 20.00th=[ 247], 00:27:24.473 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:27:24.473 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 266], 95.00th=[ 288], 00:27:24.473 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 584], 99.95th=[ 584], 00:27:24.473 | 99.99th=[ 584] 00:27:24.473 bw ( KiB/s): min= 144, max= 368, per=4.54%, avg=256.00, stdev=37.33, samples=19 00:27:24.473 iops : min= 36, max= 92, avg=64.00, stdev= 9.33, samples=19 00:27:24.473 lat (msec) : 250=33.65%, 500=63.78%, 750=2.56% 00:27:24.473 cpu : usr=98.61%, sys=0.96%, ctx=14, majf=0, minf=52 00:27:24.473 IO depths : 1=0.8%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 
filename1: (groupid=0, jobs=1): err= 0: pid=3146128: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=60, BW=243KiB/s (248kB/s)(2464KiB/10157msec) 00:27:24.473 slat (nsec): min=7985, max=41811, avg=11287.15, stdev=3934.75 00:27:24.473 clat (msec): min=202, max=491, avg=262.69, stdev=34.60 00:27:24.473 lat (msec): min=202, max=491, avg=262.70, stdev=34.60 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 207], 5.00th=[ 232], 10.00th=[ 241], 20.00th=[ 247], 00:27:24.473 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:27:24.473 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 338], 00:27:24.473 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 493], 99.95th=[ 493], 00:27:24.473 | 99.99th=[ 493] 00:27:24.473 bw ( KiB/s): min= 128, max= 256, per=4.27%, avg=240.00, stdev=34.04, samples=20 00:27:24.473 iops : min= 32, max= 64, avg=60.00, stdev= 8.51, samples=20 00:27:24.473 lat (msec) : 250=29.22%, 500=70.78% 00:27:24.473 cpu : usr=98.53%, sys=1.04%, ctx=19, majf=0, minf=46 00:27:24.473 IO depths : 1=0.6%, 2=1.5%, 4=8.4%, 8=77.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=89.3%, 8=5.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename1: (groupid=0, jobs=1): err= 0: pid=3146129: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=42, BW=170KiB/s (174kB/s)(1728KiB/10141msec) 00:27:24.473 slat (nsec): min=8149, max=55214, avg=13139.50, stdev=7868.62 00:27:24.473 clat (msec): min=185, max=672, avg=375.46, stdev=77.77 00:27:24.473 lat (msec): min=185, max=672, avg=375.47, stdev=77.77 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 186], 5.00th=[ 251], 10.00th=[ 268], 20.00th=[ 338], 00:27:24.473 | 30.00th=[ 342], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 401], 00:27:24.473 | 70.00th=[ 418], 80.00th=[ 435], 90.00th=[ 439], 95.00th=[ 514], 00:27:24.473 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 676], 99.95th=[ 676], 00:27:24.473 | 99.99th=[ 676] 00:27:24.473 bw ( KiB/s): min= 128, max= 256, per=3.11%, avg=175.16, stdev=61.85, samples=19 00:27:24.473 iops : min= 32, max= 64, avg=43.79, stdev=15.46, samples=19 00:27:24.473 lat (msec) : 250=4.63%, 500=89.81%, 750=5.56% 00:27:24.473 cpu : usr=98.41%, sys=1.16%, ctx=21, majf=0, minf=57 00:27:24.473 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename2: (groupid=0, jobs=1): err= 0: pid=3146130: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=58, BW=234KiB/s (239kB/s)(2376KiB/10160msec) 00:27:24.473 slat (nsec): min=12358, max=84614, avg=20664.05, stdev=6634.62 00:27:24.473 clat (msec): min=202, max=496, avg=272.88, stdev=47.64 00:27:24.473 lat (msec): min=202, max=496, avg=272.90, stdev=47.64 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 203], 5.00th=[ 228], 10.00th=[ 236], 20.00th=[ 247], 00:27:24.473 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 264], 00:27:24.473 | 70.00th=[ 268], 80.00th=[ 284], 90.00th=[ 347], 95.00th=[ 401], 00:27:24.473 | 99.00th=[ 426], 99.50th=[ 
426], 99.90th=[ 498], 99.95th=[ 498], 00:27:24.473 | 99.99th=[ 498] 00:27:24.473 bw ( KiB/s): min= 128, max= 256, per=4.11%, avg=231.20, stdev=35.77, samples=20 00:27:24.473 iops : min= 32, max= 64, avg=57.80, stdev= 8.94, samples=20 00:27:24.473 lat (msec) : 250=32.32%, 500=67.68% 00:27:24.473 cpu : usr=98.14%, sys=1.27%, ctx=13, majf=0, minf=43 00:27:24.473 IO depths : 1=1.2%, 2=2.7%, 4=10.1%, 8=74.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=89.6%, 8=5.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename2: (groupid=0, jobs=1): err= 0: pid=3146131: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=66, BW=267KiB/s (273kB/s)(2712KiB/10175msec) 00:27:24.473 slat (nsec): min=5604, max=56226, avg=11439.69, stdev=5167.52 00:27:24.473 clat (msec): min=14, max=382, avg=238.50, stdev=70.49 00:27:24.473 lat (msec): min=14, max=382, avg=238.52, stdev=70.49 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 15], 5.00th=[ 68], 10.00th=[ 159], 20.00th=[ 222], 00:27:24.473 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:27:24.473 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 288], 95.00th=[ 326], 00:27:24.473 | 99.00th=[ 380], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:27:24.473 | 99.99th=[ 384] 00:27:24.473 bw ( KiB/s): min= 176, max= 640, per=4.70%, avg=264.80, stdev=90.73, samples=20 00:27:24.473 iops : min= 44, max= 160, avg=66.20, stdev=22.68, samples=20 00:27:24.473 lat (msec) : 20=2.06%, 50=2.36%, 100=3.54%, 250=31.86%, 500=60.18% 00:27:24.473 cpu : usr=97.96%, sys=1.61%, ctx=14, majf=0, minf=64 00:27:24.473 IO depths : 1=0.1%, 2=0.7%, 4=7.5%, 8=78.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=89.0%, 8=5.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename2: (groupid=0, jobs=1): err= 0: pid=3146132: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=57, BW=232KiB/s (237kB/s)(2352KiB/10147msec) 00:27:24.473 slat (usec): min=4, max=110, avg=14.00, stdev=13.22 00:27:24.473 clat (msec): min=196, max=597, avg=275.40, stdev=64.71 00:27:24.473 lat (msec): min=196, max=597, avg=275.42, stdev=64.71 00:27:24.473 clat percentiles (msec): 00:27:24.473 | 1.00th=[ 197], 5.00th=[ 236], 10.00th=[ 243], 20.00th=[ 249], 00:27:24.473 | 30.00th=[ 251], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 264], 00:27:24.473 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 334], 95.00th=[ 393], 00:27:24.473 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:27:24.473 | 99.99th=[ 600] 00:27:24.473 bw ( KiB/s): min= 128, max= 256, per=4.27%, avg=240.84, stdev=34.76, samples=19 00:27:24.473 iops : min= 32, max= 64, avg=60.21, stdev= 8.69, samples=19 00:27:24.473 lat (msec) : 250=25.68%, 500=71.60%, 750=2.72% 00:27:24.473 cpu : usr=98.41%, sys=1.16%, ctx=17, majf=0, minf=59 00:27:24.473 IO depths : 1=3.1%, 2=6.3%, 4=15.8%, 8=65.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:27:24.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 complete : 0=0.0%, 4=91.3%, 8=3.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.473 issued rwts: total=588,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:24.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.473 filename2: (groupid=0, jobs=1): err= 0: pid=3146133: Mon Jul 15 11:53:31 2024 00:27:24.473 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10160msec) 00:27:24.474 slat (usec): min=8, max=107, avg=24.18, stdev=24.45 00:27:24.474 clat (msec): min=202, max=506, avg=266.18, stdev=40.11 00:27:24.474 lat (msec): min=202, max=506, avg=266.21, stdev=40.11 00:27:24.474 clat percentiles (msec): 00:27:24.474 | 1.00th=[ 218], 5.00th=[ 230], 10.00th=[ 236], 20.00th=[ 247], 00:27:24.474 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:27:24.474 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 321], 95.00th=[ 351], 00:27:24.474 | 99.00th=[ 405], 99.50th=[ 426], 99.90th=[ 506], 99.95th=[ 506], 00:27:24.474 | 99.99th=[ 506] 00:27:24.474 bw ( KiB/s): min= 128, max= 256, per=4.20%, avg=236.80, stdev=34.67, samples=20 00:27:24.474 iops : min= 32, max= 64, avg=59.20, stdev= 8.67, samples=20 00:27:24.474 lat (msec) : 250=27.63%, 500=72.04%, 750=0.33% 00:27:24.474 cpu : usr=98.34%, sys=1.24%, ctx=15, majf=0, minf=48 00:27:24.474 IO depths : 1=0.7%, 2=2.0%, 4=9.7%, 8=75.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:24.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 complete : 0=0.0%, 4=89.6%, 8=5.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.474 filename2: (groupid=0, jobs=1): err= 0: pid=3146134: Mon Jul 15 11:53:31 2024 00:27:24.474 read: IOPS=51, BW=207KiB/s (212kB/s)(2104KiB/10143msec) 00:27:24.474 slat (usec): min=5, max=100, avg=25.93, stdev=25.73 00:27:24.474 clat (msec): min=183, max=676, avg=308.08, stdev=82.73 00:27:24.474 lat (msec): min=183, max=676, avg=308.11, stdev=82.74 00:27:24.474 clat percentiles (msec): 00:27:24.474 | 1.00th=[ 184], 5.00th=[ 230], 10.00th=[ 251], 20.00th=[ 259], 00:27:24.474 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 266], 60.00th=[ 268], 00:27:24.474 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 414], 95.00th=[ 443], 00:27:24.474 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 676], 99.95th=[ 676], 00:27:24.474 | 99.99th=[ 676] 00:27:24.474 bw ( KiB/s): min= 128, max= 256, per=3.81%, avg=214.74, stdev=58.98, samples=19 00:27:24.474 iops : min= 32, max= 64, avg=53.68, stdev=14.75, samples=19 00:27:24.474 lat (msec) : 250=9.89%, 500=87.07%, 750=3.04% 00:27:24.474 cpu : usr=98.14%, sys=1.33%, ctx=41, majf=0, minf=56 00:27:24.474 IO depths : 1=2.5%, 2=8.7%, 4=25.1%, 8=53.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:27:24.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.474 filename2: (groupid=0, jobs=1): err= 0: pid=3146135: Mon Jul 15 11:53:31 2024 00:27:24.474 read: IOPS=41, BW=164KiB/s (168kB/s)(1664KiB/10141msec) 00:27:24.474 slat (nsec): min=6077, max=58874, avg=11839.15, stdev=5690.92 00:27:24.474 clat (msec): min=224, max=591, avg=389.14, stdev=71.44 00:27:24.474 lat (msec): min=224, max=591, avg=389.15, stdev=71.44 00:27:24.474 clat percentiles (msec): 00:27:24.474 | 1.00th=[ 249], 5.00th=[ 266], 10.00th=[ 326], 20.00th=[ 334], 00:27:24.474 | 30.00th=[ 342], 40.00th=[ 355], 50.00th=[ 384], 60.00th=[ 414], 00:27:24.474 | 
70.00th=[ 426], 80.00th=[ 439], 90.00th=[ 456], 95.00th=[ 527], 00:27:24.474 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:27:24.474 | 99.99th=[ 592] 00:27:24.474 bw ( KiB/s): min= 128, max= 256, per=2.99%, avg=168.42, stdev=54.22, samples=19 00:27:24.474 iops : min= 32, max= 64, avg=42.11, stdev=13.56, samples=19 00:27:24.474 lat (msec) : 250=1.44%, 500=91.35%, 750=7.21% 00:27:24.474 cpu : usr=98.19%, sys=1.24%, ctx=54, majf=0, minf=58 00:27:24.474 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:27:24.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.474 filename2: (groupid=0, jobs=1): err= 0: pid=3146136: Mon Jul 15 11:53:31 2024 00:27:24.474 read: IOPS=59, BW=240KiB/s (245kB/s)(2432KiB/10146msec) 00:27:24.474 slat (nsec): min=5090, max=55984, avg=13406.25, stdev=7647.07 00:27:24.474 clat (msec): min=165, max=513, avg=266.86, stdev=56.48 00:27:24.474 lat (msec): min=165, max=513, avg=266.87, stdev=56.48 00:27:24.474 clat percentiles (msec): 00:27:24.474 | 1.00th=[ 165], 5.00th=[ 220], 10.00th=[ 243], 20.00th=[ 247], 00:27:24.474 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:27:24.474 | 70.00th=[ 266], 80.00th=[ 266], 90.00th=[ 338], 95.00th=[ 363], 00:27:24.474 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:27:24.474 | 99.99th=[ 514] 00:27:24.474 bw ( KiB/s): min= 128, max= 368, per=4.43%, avg=249.26, stdev=47.50, samples=19 00:27:24.474 iops : min= 32, max= 92, avg=62.32, stdev=11.87, samples=19 00:27:24.474 lat (msec) : 250=30.92%, 500=66.45%, 750=2.63% 00:27:24.474 cpu : usr=98.12%, sys=1.25%, ctx=29, majf=0, minf=42 00:27:24.474 IO depths : 1=3.0%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:27:24.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.474 filename2: (groupid=0, jobs=1): err= 0: pid=3146137: Mon Jul 15 11:53:31 2024 00:27:24.474 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10160msec) 00:27:24.474 slat (nsec): min=8539, max=78089, avg=16503.81, stdev=10562.83 00:27:24.474 clat (msec): min=184, max=465, avg=260.31, stdev=28.77 00:27:24.474 lat (msec): min=184, max=465, avg=260.33, stdev=28.78 00:27:24.474 clat percentiles (msec): 00:27:24.474 | 1.00th=[ 186], 5.00th=[ 222], 10.00th=[ 243], 20.00th=[ 247], 00:27:24.474 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 262], 00:27:24.474 | 70.00th=[ 266], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 338], 00:27:24.474 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 468], 99.95th=[ 468], 00:27:24.474 | 99.99th=[ 468] 00:27:24.474 bw ( KiB/s): min= 128, max= 256, per=4.32%, avg=243.20, stdev=36.93, samples=20 00:27:24.474 iops : min= 32, max= 64, avg=60.80, stdev= 9.23, samples=20 00:27:24.474 lat (msec) : 250=28.85%, 500=71.15% 00:27:24.474 cpu : usr=97.99%, sys=1.40%, ctx=43, majf=0, minf=67 00:27:24.474 IO depths : 1=0.8%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:24.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 complete : 0=0.0%, 
4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.474 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:24.474 00:27:24.474 Run status group 0 (all jobs): 00:27:24.474 READ: bw=5620KiB/s (5755kB/s), 164KiB/s-273KiB/s (168kB/s-279kB/s), io=55.9MiB (58.6MB), run=10141-10179msec 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.474 bdev_null0 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.474 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 [2024-07-15 11:53:31.416134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 bdev_null1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.475 { 00:27:24.475 "params": { 00:27:24.475 "name": "Nvme$subsystem", 00:27:24.475 "trtype": "$TEST_TRANSPORT", 00:27:24.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.475 "adrfam": "ipv4", 00:27:24.475 "trsvcid": "$NVMF_PORT", 00:27:24.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.475 "hdgst": ${hdgst:-false}, 00:27:24.475 "ddgst": ${ddgst:-false} 00:27:24.475 }, 00:27:24.475 "method": "bdev_nvme_attach_controller" 00:27:24.475 } 00:27:24.475 EOF 00:27:24.475 )") 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.475 { 00:27:24.475 "params": { 00:27:24.475 "name": "Nvme$subsystem", 00:27:24.475 "trtype": "$TEST_TRANSPORT", 00:27:24.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.475 "adrfam": "ipv4", 00:27:24.475 "trsvcid": "$NVMF_PORT", 00:27:24.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.475 "hdgst": ${hdgst:-false}, 00:27:24.475 "ddgst": ${ddgst:-false} 00:27:24.475 }, 00:27:24.475 "method": "bdev_nvme_attach_controller" 00:27:24.475 } 00:27:24.475 EOF 00:27:24.475 )") 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:24.475 "params": { 00:27:24.475 "name": "Nvme0", 00:27:24.475 "trtype": "tcp", 00:27:24.475 "traddr": "10.0.0.2", 00:27:24.475 "adrfam": "ipv4", 00:27:24.475 "trsvcid": "4420", 00:27:24.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.475 "hdgst": false, 00:27:24.475 "ddgst": false 00:27:24.475 }, 00:27:24.475 "method": "bdev_nvme_attach_controller" 00:27:24.475 },{ 00:27:24.475 "params": { 00:27:24.475 "name": "Nvme1", 00:27:24.475 "trtype": "tcp", 00:27:24.475 "traddr": "10.0.0.2", 00:27:24.475 "adrfam": "ipv4", 00:27:24.475 "trsvcid": "4420", 00:27:24.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:24.475 "hdgst": false, 00:27:24.475 "ddgst": false 00:27:24.475 }, 00:27:24.475 "method": "bdev_nvme_attach_controller" 00:27:24.475 }' 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:24.475 11:53:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.475 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:24.475 ... 00:27:24.475 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:24.475 ... 
00:27:24.475 fio-3.35 00:27:24.475 Starting 4 threads 00:27:24.475 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.740 00:27:29.740 filename0: (groupid=0, jobs=1): err= 0: pid=3147523: Mon Jul 15 11:53:37 2024 00:27:29.740 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5001msec) 00:27:29.740 slat (usec): min=3, max=128, avg=19.68, stdev=11.46 00:27:29.740 clat (usec): min=1099, max=9917, avg=4219.22, stdev=621.49 00:27:29.740 lat (usec): min=1119, max=9931, avg=4238.90, stdev=621.30 00:27:29.740 clat percentiles (usec): 00:27:29.740 | 1.00th=[ 2671], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3949], 00:27:29.740 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:27:29.740 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5342], 00:27:29.740 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 9896], 00:27:29.740 | 99.99th=[ 9896] 00:27:29.740 bw ( KiB/s): min=14400, max=15728, per=24.78%, avg=14869.33, stdev=414.07, samples=9 00:27:29.740 iops : min= 1800, max= 1966, avg=1858.67, stdev=51.76, samples=9 00:27:29.740 lat (msec) : 2=0.34%, 4=24.94%, 10=74.72% 00:27:29.740 cpu : usr=86.48%, sys=8.32%, ctx=166, majf=0, minf=9 00:27:29.740 IO depths : 1=0.5%, 2=13.6%, 4=59.2%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 issued rwts: total=9326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:29.740 filename0: (groupid=0, jobs=1): err= 0: pid=3147524: Mon Jul 15 11:53:37 2024 00:27:29.740 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5002msec) 00:27:29.740 slat (nsec): min=3847, max=97749, avg=18671.37, stdev=11505.74 00:27:29.740 clat (usec): min=774, max=11620, avg=4211.20, stdev=603.15 00:27:29.740 lat (usec): min=792, max=11632, avg=4229.87, stdev=602.92 00:27:29.740 clat percentiles (usec): 00:27:29.740 | 1.00th=[ 2737], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3916], 00:27:29.740 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:27:29.740 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5276], 00:27:29.740 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7635], 99.95th=[ 8356], 00:27:29.740 | 99.99th=[11600] 00:27:29.740 bw ( KiB/s): min=14560, max=15344, per=24.87%, avg=14924.44, stdev=231.56, samples=9 00:27:29.740 iops : min= 1820, max= 1918, avg=1865.56, stdev=28.94, samples=9 00:27:29.740 lat (usec) : 1000=0.03% 00:27:29.740 lat (msec) : 2=0.29%, 4=24.98%, 10=74.67%, 20=0.02% 00:27:29.740 cpu : usr=94.96%, sys=4.52%, ctx=11, majf=0, minf=9 00:27:29.740 IO depths : 1=0.2%, 2=14.3%, 4=57.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 issued rwts: total=9354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:29.740 filename1: (groupid=0, jobs=1): err= 0: pid=3147525: Mon Jul 15 11:53:37 2024 00:27:29.740 read: IOPS=1853, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5001msec) 00:27:29.740 slat (nsec): min=3993, max=97707, avg=18317.08, stdev=11391.06 00:27:29.740 clat (usec): min=938, max=7729, avg=4252.99, stdev=594.53 00:27:29.740 lat (usec): min=959, max=7750, avg=4271.30, stdev=593.99 00:27:29.740 clat percentiles (usec): 
00:27:29.740 | 1.00th=[ 2671], 5.00th=[ 3490], 10.00th=[ 3752], 20.00th=[ 3982], 00:27:29.740 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:27:29.740 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5342], 00:27:29.740 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7570], 00:27:29.740 | 99.99th=[ 7701] 00:27:29.740 bw ( KiB/s): min=14480, max=15150, per=24.59%, avg=14760.67, stdev=217.63, samples=9 00:27:29.740 iops : min= 1810, max= 1893, avg=1845.00, stdev=27.04, samples=9 00:27:29.740 lat (usec) : 1000=0.01% 00:27:29.740 lat (msec) : 2=0.32%, 4=20.82%, 10=78.85% 00:27:29.740 cpu : usr=94.82%, sys=4.68%, ctx=11, majf=0, minf=9 00:27:29.740 IO depths : 1=0.2%, 2=12.4%, 4=60.3%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 issued rwts: total=9271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:29.740 filename1: (groupid=0, jobs=1): err= 0: pid=3147526: Mon Jul 15 11:53:37 2024 00:27:29.740 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5004msec) 00:27:29.740 slat (nsec): min=4209, max=73813, avg=14170.55, stdev=8461.79 00:27:29.740 clat (usec): min=853, max=9144, avg=4127.07, stdev=547.19 00:27:29.740 lat (usec): min=883, max=9156, avg=4141.24, stdev=547.45 00:27:29.740 clat percentiles (usec): 00:27:29.740 | 1.00th=[ 2606], 5.00th=[ 3228], 10.00th=[ 3523], 20.00th=[ 3818], 00:27:29.740 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:27:29.740 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4817], 00:27:29.740 | 99.00th=[ 5997], 99.50th=[ 6587], 99.90th=[ 7570], 99.95th=[ 9110], 00:27:29.740 | 99.99th=[ 9110] 00:27:29.740 bw ( KiB/s): min=14688, max=16608, per=25.55%, avg=15335.80, stdev=526.53, samples=10 00:27:29.740 iops : min= 1836, max= 2076, avg=1916.90, stdev=65.82, samples=10 00:27:29.740 lat (usec) : 1000=0.05% 00:27:29.740 lat (msec) : 2=0.15%, 4=29.06%, 10=70.74% 00:27:29.740 cpu : usr=93.86%, sys=5.62%, ctx=7, majf=0, minf=9 00:27:29.740 IO depths : 1=0.3%, 2=10.7%, 4=61.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.740 issued rwts: total=9589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:29.740 00:27:29.740 Run status group 0 (all jobs): 00:27:29.740 READ: bw=58.6MiB/s (61.5MB/s), 14.5MiB/s-15.0MiB/s (15.2MB/s-15.7MB/s), io=293MiB (308MB), run=5001-5004msec 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.740 11:53:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 00:27:29.999 real 0m24.519s 00:27:29.999 user 4m35.694s 00:27:29.999 sys 0m6.867s 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 ************************************ 00:27:29.999 END TEST fio_dif_rand_params 00:27:29.999 ************************************ 00:27:29.999 11:53:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:29.999 11:53:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:29.999 11:53:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:29.999 11:53:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 ************************************ 00:27:29.999 START TEST fio_dif_digest 00:27:29.999 ************************************ 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 bdev_null0 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:29.999 [2024-07-15 11:53:37.821466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.999 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.000 { 00:27:30.000 "params": { 00:27:30.000 "name": "Nvme$subsystem", 00:27:30.000 "trtype": "$TEST_TRANSPORT", 00:27:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.000 "adrfam": "ipv4", 00:27:30.000 "trsvcid": "$NVMF_PORT", 00:27:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.000 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.000 "hdgst": ${hdgst:-false}, 00:27:30.000 "ddgst": ${ddgst:-false} 00:27:30.000 }, 00:27:30.000 "method": "bdev_nvme_attach_controller" 00:27:30.000 } 00:27:30.000 EOF 00:27:30.000 )") 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:30.000 "params": { 00:27:30.000 "name": "Nvme0", 00:27:30.000 "trtype": "tcp", 00:27:30.000 "traddr": "10.0.0.2", 00:27:30.000 "adrfam": "ipv4", 00:27:30.000 "trsvcid": "4420", 00:27:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:30.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:30.000 "hdgst": true, 00:27:30.000 "ddgst": true 00:27:30.000 }, 00:27:30.000 "method": "bdev_nvme_attach_controller" 00:27:30.000 }' 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:30.000 11:53:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.258 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:30.258 ... 
00:27:30.258 fio-3.35 00:27:30.258 Starting 3 threads 00:27:30.258 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.449 00:27:42.449 filename0: (groupid=0, jobs=1): err= 0: pid=3148278: Mon Jul 15 11:53:48 2024 00:27:42.449 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10007msec) 00:27:42.449 slat (nsec): min=5508, max=54274, avg=16147.52, stdev=5557.76 00:27:42.449 clat (usec): min=7618, max=57485, avg=15225.68, stdev=9041.97 00:27:42.449 lat (usec): min=7638, max=57513, avg=15241.83, stdev=9046.09 00:27:42.449 clat percentiles (usec): 00:27:42.449 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11600], 20.00th=[11994], 00:27:42.449 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:27:42.449 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14746], 95.00th=[45876], 00:27:42.449 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54264], 99.95th=[57410], 00:27:42.449 | 99.99th=[57410] 00:27:42.449 bw ( KiB/s): min= 7680, max=31488, per=34.57%, avg=25164.80, stdev=8917.01, samples=20 00:27:42.449 iops : min= 60, max= 246, avg=196.60, stdev=69.66, samples=20 00:27:42.449 lat (msec) : 10=0.20%, 20=92.84%, 50=5.08%, 100=1.88% 00:27:42.449 cpu : usr=92.96%, sys=6.38%, ctx=118, majf=0, minf=63 00:27:42.449 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:42.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.449 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:42.449 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:42.449 filename0: (groupid=0, jobs=1): err= 0: pid=3148279: Mon Jul 15 11:53:48 2024 00:27:42.449 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10046msec) 00:27:42.449 slat (nsec): min=6097, max=63865, avg=17272.26, stdev=5336.03 00:27:42.449 clat (usec): min=10629, max=70600, avg=16388.47, stdev=10530.11 00:27:42.449 lat (usec): min=10644, max=70609, avg=16405.75, stdev=10530.49 00:27:42.449 clat percentiles (usec): 00:27:42.449 | 1.00th=[11469], 5.00th=[12125], 10.00th=[12387], 20.00th=[12780], 00:27:42.449 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:27:42.449 | 70.00th=[14222], 80.00th=[14746], 90.00th=[15664], 95.00th=[52691], 00:27:42.449 | 99.00th=[59507], 99.50th=[60556], 99.90th=[65799], 99.95th=[70779], 00:27:42.449 | 99.99th=[70779] 00:27:42.449 bw ( KiB/s): min= 6144, max=29184, per=32.21%, avg=23449.60, stdev=8668.82, samples=20 00:27:42.449 iops : min= 48, max= 228, avg=183.20, stdev=67.73, samples=20 00:27:42.449 lat (msec) : 20=93.40%, 50=0.38%, 100=6.22% 00:27:42.449 cpu : usr=86.86%, sys=9.54%, ctx=502, majf=0, minf=162 00:27:42.449 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:42.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.449 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:42.449 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:42.449 filename0: (groupid=0, jobs=1): err= 0: pid=3148280: Mon Jul 15 11:53:48 2024 00:27:42.449 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(239MiB/10046msec) 00:27:42.449 slat (nsec): min=5820, max=98081, avg=17367.78, stdev=6261.70 00:27:42.449 clat (usec): min=10499, max=60690, avg=15735.58, stdev=9390.69 00:27:42.449 lat (usec): min=10514, max=60711, avg=15752.95, stdev=9391.80 00:27:42.449 clat percentiles (usec): 
00:27:42.449 | 1.00th=[10945], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:27:42.449 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:27:42.449 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15270], 95.00th=[47449], 00:27:42.449 | 99.00th=[53740], 99.50th=[55313], 99.90th=[60556], 99.95th=[60556], 00:27:42.449 | 99.99th=[60556] 00:27:42.449 bw ( KiB/s): min= 7424, max=29696, per=33.55%, avg=24425.20, stdev=8680.12, samples=20 00:27:42.449 iops : min= 58, max= 232, avg=190.80, stdev=67.80, samples=20 00:27:42.449 lat (msec) : 20=92.98%, 50=4.03%, 100=2.98% 00:27:42.449 cpu : usr=88.69%, sys=8.74%, ctx=531, majf=0, minf=160 00:27:42.449 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:42.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.449 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:42.449 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:42.449 00:27:42.449 Run status group 0 (all jobs): 00:27:42.449 READ: bw=71.1MiB/s (74.5MB/s), 22.8MiB/s-24.6MiB/s (23.9MB/s-25.8MB/s), io=714MiB (749MB), run=10007-10046msec 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.449 00:27:42.449 real 0m11.080s 00:27:42.449 user 0m28.007s 00:27:42.449 sys 0m2.716s 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:42.449 11:53:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:42.449 ************************************ 00:27:42.449 END TEST fio_dif_digest 00:27:42.449 ************************************ 00:27:42.449 11:53:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:42.449 11:53:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:42.449 11:53:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:27:42.449 rmmod nvme_tcp 00:27:42.449 rmmod nvme_fabrics 00:27:42.449 rmmod nvme_keyring 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:42.449 11:53:48 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3142214 ']' 00:27:42.450 11:53:48 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3142214 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3142214 ']' 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3142214 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3142214 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3142214' 00:27:42.450 killing process with pid 3142214 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3142214 00:27:42.450 11:53:48 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3142214 00:27:42.450 11:53:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:42.450 11:53:49 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.450 Waiting for block devices as requested 00:27:42.450 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:42.709 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:42.709 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:42.969 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:42.969 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:42.969 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:42.969 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:43.229 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:43.229 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:43.229 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:43.229 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:43.489 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:43.489 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:43.489 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:43.747 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:43.747 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:43.747 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:43.747 11:53:51 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.747 11:53:51 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.747 11:53:51 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.747 11:53:51 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.747 11:53:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.747 11:53:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:43.747 11:53:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.284 11:53:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:46.284 00:27:46.284 real 1m7.010s 00:27:46.284 user 6m30.845s 00:27:46.284 sys 0m19.287s 00:27:46.284 11:53:53 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:27:46.284 11:53:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:46.284 ************************************ 00:27:46.284 END TEST nvmf_dif 00:27:46.284 ************************************ 00:27:46.284 11:53:53 -- common/autotest_common.sh@1142 -- # return 0 00:27:46.284 11:53:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:46.284 11:53:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:46.284 11:53:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.284 11:53:53 -- common/autotest_common.sh@10 -- # set +x 00:27:46.284 ************************************ 00:27:46.284 START TEST nvmf_abort_qd_sizes 00:27:46.284 ************************************ 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:46.284 * Looking for test storage... 00:27:46.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.284 11:53:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.285 11:53:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:46.285 11:53:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:48.191 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:48.191 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:48.191 Found net devices under 0000:84:00.0: cvl_0_0 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:48.191 Found net devices under 0000:84:00.1: cvl_0_1 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
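The NIC discovery just traced pairs each supported PCI function (the two E810 ports at 0000:84:00.0 and 0000:84:00.1 on this node) with the kernel net device that sysfs exposes underneath it; a port bound to a userspace driver has no net/ directory and would be skipped. Roughly, as a sketch of that lookup (the real helper additionally requires the link to be up, the [[ up == up ]] check in the trace, before adding the interface to net_devs):

for pci in 0000:84:00.0 0000:84:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue     # no net/ entry: device not bound to a kernel driver
        netdev=${netdir##*/}
        echo "Found net devices under $pci: $netdev ($(cat "/sys/class/net/$netdev/operstate"))"
    done
done

On this machine the two interfaces resolve to cvl_0_0 and cvl_0_1, which the rest of the run uses as the target and initiator sides respectively.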
00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.191 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:27:48.191 00:27:48.191 --- 10.0.0.2 ping statistics --- 00:27:48.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.191 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:48.191 00:27:48.191 --- 10.0.0.1 ping statistics --- 00:27:48.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.191 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:48.191 11:53:56 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:49.572 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:49.572 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:49.572 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:50.510 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:50.510 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.510 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.510 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.510 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.510 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.510 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3153226 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3153226 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3153226 ']' 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:50.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:50.770 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:50.770 [2024-07-15 11:53:58.562357] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:27:50.770 [2024-07-15 11:53:58.562425] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.770 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.770 [2024-07-15 11:53:58.621966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.770 [2024-07-15 11:53:58.724966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.770 [2024-07-15 11:53:58.725021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.770 [2024-07-15 11:53:58.725045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.770 [2024-07-15 11:53:58.725056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.771 [2024-07-15 11:53:58.725065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.771 [2024-07-15 11:53:58.725143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.771 [2024-07-15 11:53:58.725198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.771 [2024-07-15 11:53:58.725264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.771 [2024-07-15 11:53:58.725267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:27:51.030 11:53:58 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:51.030 11:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:51.030 ************************************ 00:27:51.030 START TEST spdk_target_abort 00:27:51.030 ************************************ 00:27:51.030 11:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:51.030 11:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:51.030 11:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:27:51.030 11:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.030 11:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 spdk_targetn1 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 [2024-07-15 11:54:01.750938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 [2024-07-15 11:54:01.783206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:54.311 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:54.311 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:57.594 Initializing NVMe Controllers 00:27:57.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:57.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:57.594 Initialization complete. Launching workers. 00:27:57.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11675, failed: 0 00:27:57.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1319, failed to submit 10356 00:27:57.594 success 758, unsuccess 561, failed 0 00:27:57.594 11:54:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:57.594 11:54:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:57.594 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.881 Initializing NVMe Controllers 00:28:00.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:00.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:00.881 Initialization complete. Launching workers. 00:28:00.881 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8509, failed: 0 00:28:00.881 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 7289 00:28:00.881 success 361, unsuccess 859, failed 0 00:28:00.881 11:54:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:00.881 11:54:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:00.881 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.168 Initializing NVMe Controllers 00:28:04.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:04.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:04.168 Initialization complete. Launching workers. 
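For reference, every abort run in this test follows the same invocation pattern; the sketch below reconstructs that loop from the xtrace above (queue depths, I/O size and target string are copied from this run) and is not the abort_qd_sizes.sh script itself:

  # Sketch of the queue-depth sweep driven above, using the SPDK abort example app.
  ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
  TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # 50/50 read-write, 4096-byte I/O, queue depth $qd; the tool prints per-run
      # I/O completed, aborts submitted and success/unsuccess counts as seen above.
      "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
  done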
00:28:04.168 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31950, failed: 0 00:28:04.168 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2745, failed to submit 29205 00:28:04.168 success 541, unsuccess 2204, failed 0 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.168 11:54:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3153226 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3153226 ']' 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3153226 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3153226 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3153226' 00:28:05.108 killing process with pid 3153226 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3153226 00:28:05.108 11:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3153226 00:28:05.366 00:28:05.366 real 0m14.241s 00:28:05.366 user 0m53.760s 00:28:05.366 sys 0m2.821s 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.366 ************************************ 00:28:05.366 END TEST spdk_target_abort 00:28:05.366 ************************************ 00:28:05.366 11:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:05.366 11:54:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:05.366 11:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.366 11:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.366 11:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:05.366 
************************************ 00:28:05.366 START TEST kernel_target_abort 00:28:05.366 ************************************ 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:05.366 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:06.302 Waiting for block devices as requested 00:28:06.560 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:06.560 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:06.818 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:06.818 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:06.819 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:07.077 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:07.077 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:07.077 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:07.077 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:07.337 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:07.337 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:07.337 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:07.337 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:07.597 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:07.597 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:07.597 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:07.597 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:07.857 No valid GPT data, bailing 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:07.857 11:54:15 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:07.857 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:28:07.857 00:28:07.857 Discovery Log Number of Records 2, Generation counter 2 00:28:07.857 =====Discovery Log Entry 0====== 00:28:07.857 trtype: tcp 00:28:07.857 adrfam: ipv4 00:28:07.857 subtype: current discovery subsystem 00:28:07.857 treq: not specified, sq flow control disable supported 00:28:07.857 portid: 1 00:28:07.857 trsvcid: 4420 00:28:07.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:07.858 traddr: 10.0.0.1 00:28:07.858 eflags: none 00:28:07.858 sectype: none 00:28:07.858 =====Discovery Log Entry 1====== 00:28:07.858 trtype: tcp 00:28:07.858 adrfam: ipv4 00:28:07.858 subtype: nvme subsystem 00:28:07.858 treq: not specified, sq flow control disable supported 00:28:07.858 portid: 1 00:28:07.858 trsvcid: 4420 00:28:07.858 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:07.858 traddr: 10.0.0.1 00:28:07.858 eflags: none 00:28:07.858 sectype: none 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.858 11:54:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:07.858 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:08.117 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:08.117 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:08.117 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:08.117 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:08.117 11:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:08.117 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.406 Initializing NVMe Controllers 00:28:11.406 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:11.406 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:11.406 Initialization complete. Launching workers. 00:28:11.406 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52633, failed: 0 00:28:11.406 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 52633, failed to submit 0 00:28:11.406 success 0, unsuccess 52633, failed 0 00:28:11.406 11:54:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:11.406 11:54:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:11.406 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.780 Initializing NVMe Controllers 00:28:14.780 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:14.780 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:14.780 Initialization complete. Launching workers. 
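The kernel NVMe/TCP target exercised in this half of the test was assembled through the nvmet configfs interface; the xtrace only records the echoed values, so the sketch below pairs those values with the standard kernel nvmet attribute files (the redirection targets are an assumption, everything else is taken from this run):

  # Sketch of configure_kernel_target as it ran above; the attribute file names are
  # the usual nvmet configfs entries and are not themselves visible in the xtrace.
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1
  modprobe nvmet                 # nvmet_tcp is also loaded by teardown time (modprobe -r nvmet_tcp nvmet)
  mkdir "$subsys" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$ns/device_path"
  echo 1            > "$ns/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"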
00:28:14.780 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95130, failed: 0 00:28:14.780 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24006, failed to submit 71124 00:28:14.780 success 0, unsuccess 24006, failed 0 00:28:14.780 11:54:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:14.780 11:54:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:14.780 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.310 Initializing NVMe Controllers 00:28:17.310 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:17.310 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:17.310 Initialization complete. Launching workers. 00:28:17.310 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92470, failed: 0 00:28:17.310 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23098, failed to submit 69372 00:28:17.310 success 0, unsuccess 23098, failed 0 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:17.310 11:54:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:18.685 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:18.685 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:18.685 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:18.685 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:19.622 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:19.882 00:28:19.882 real 0m14.412s 00:28:19.882 user 0m6.307s 00:28:19.882 sys 0m3.293s 00:28:19.882 11:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.882 11:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.882 ************************************ 00:28:19.882 END TEST kernel_target_abort 00:28:19.882 ************************************ 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:19.882 rmmod nvme_tcp 00:28:19.882 rmmod nvme_fabrics 00:28:19.882 rmmod nvme_keyring 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3153226 ']' 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3153226 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3153226 ']' 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3153226 00:28:19.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3153226) - No such process 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3153226 is not found' 00:28:19.882 Process with pid 3153226 is not found 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:19.882 11:54:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:20.843 Waiting for block devices as requested 00:28:21.101 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:21.101 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:21.359 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:21.359 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:21.359 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:21.618 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:21.618 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:21.618 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:21.618 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:21.878 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:21.878 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:21.878 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:22.159 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:22.159 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:28:22.159 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:22.159 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:22.417 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:22.417 11:54:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.952 11:54:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.952 00:28:24.952 real 0m38.520s 00:28:24.952 user 1m2.290s 00:28:24.952 sys 0m9.717s 00:28:24.952 11:54:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.952 11:54:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:24.952 ************************************ 00:28:24.952 END TEST nvmf_abort_qd_sizes 00:28:24.952 ************************************ 00:28:24.952 11:54:32 -- common/autotest_common.sh@1142 -- # return 0 00:28:24.952 11:54:32 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:24.952 11:54:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:24.952 11:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.952 11:54:32 -- common/autotest_common.sh@10 -- # set +x 00:28:24.952 ************************************ 00:28:24.952 START TEST keyring_file 00:28:24.952 ************************************ 00:28:24.952 11:54:32 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:24.952 * Looking for test storage... 
00:28:24.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:24.952 11:54:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:24.952 11:54:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.952 11:54:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.952 11:54:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.952 11:54:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.952 11:54:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.953 11:54:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.953 11:54:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.953 11:54:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.953 11:54:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:24.953 11:54:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.91ZaHMJgY3 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:24.953 11:54:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.91ZaHMJgY3 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.91ZaHMJgY3 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.91ZaHMJgY3 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LYxLNNQpeZ 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:24.953 11:54:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LYxLNNQpeZ 00:28:24.953 11:54:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LYxLNNQpeZ 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LYxLNNQpeZ 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=3159634 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:24.953 11:54:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3159634 00:28:24.953 11:54:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3159634 ']' 00:28:24.953 11:54:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.953 11:54:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.953 11:54:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.953 11:54:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.953 11:54:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:24.953 [2024-07-15 11:54:32.585520] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
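The two key files staged just above hold the 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 hex keys rewrapped by the test's format_interchange_psk helper; at the shell level the preparation reduces to the sketch below, which mirrors only the file handling visible in the xtrace and leaves the helper's output format opaque:

  # Sketch of prep_key as seen above: stage a PSK file readable only by the owner.
  # format_interchange_psk is the helper invoked in the trace; its output format is
  # not reproduced here.
  key0path=$(mktemp)                          # /tmp/tmp.91ZaHMJgY3 in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"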
00:28:24.953 [2024-07-15 11:54:32.585625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159634 ] 00:28:24.953 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.953 [2024-07-15 11:54:32.642783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.953 [2024-07-15 11:54:32.748512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.211 11:54:32 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.211 11:54:32 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:25.211 11:54:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:25.211 11:54:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.211 11:54:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:25.211 [2024-07-15 11:54:33.005652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.211 null0 00:28:25.211 [2024-07-15 11:54:33.037711] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:25.211 [2024-07-15 11:54:33.038243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:25.211 [2024-07-15 11:54:33.045750] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.211 11:54:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:25.211 [2024-07-15 11:54:33.057774] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:25.211 request: 00:28:25.211 { 00:28:25.211 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.211 "secure_channel": false, 00:28:25.211 "listen_address": { 00:28:25.211 "trtype": "tcp", 00:28:25.211 "traddr": "127.0.0.1", 00:28:25.211 "trsvcid": "4420" 00:28:25.211 }, 00:28:25.211 "method": "nvmf_subsystem_add_listener", 00:28:25.211 "req_id": 1 00:28:25.211 } 00:28:25.211 Got JSON-RPC error response 00:28:25.211 response: 00:28:25.211 { 00:28:25.211 "code": -32602, 00:28:25.211 "message": "Invalid parameters" 00:28:25.211 } 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.211 11:54:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=3159651 00:28:25.211 11:54:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3159651 /var/tmp/bperf.sock 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3159651 ']' 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.211 11:54:33 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:25.211 11:54:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:25.211 [2024-07-15 11:54:33.107774] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:25.211 [2024-07-15 11:54:33.107845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159651 ] 00:28:25.211 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.211 [2024-07-15 11:54:33.165075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.468 [2024-07-15 11:54:33.277462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.468 11:54:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.468 11:54:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:25.468 11:54:33 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:25.468 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:25.725 11:54:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LYxLNNQpeZ 00:28:25.725 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LYxLNNQpeZ 00:28:25.982 11:54:33 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:25.982 11:54:33 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:25.982 11:54:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.982 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.982 11:54:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:26.239 11:54:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.91ZaHMJgY3 == \/\t\m\p\/\t\m\p\.\9\1\Z\a\H\M\J\g\Y\3 ]] 00:28:26.239 11:54:34 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:28:26.240 11:54:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:26.240 11:54:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.240 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.240 11:54:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:26.497 11:54:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.LYxLNNQpeZ == \/\t\m\p\/\t\m\p\.\L\Y\x\L\N\N\Q\p\e\Z ]] 00:28:26.497 11:54:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:26.497 11:54:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:26.497 11:54:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.497 11:54:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.497 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.497 11:54:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:26.755 11:54:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:26.755 11:54:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:26.755 11:54:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:26.755 11:54:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.755 11:54:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.755 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.755 11:54:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:27.013 11:54:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:27.013 11:54:34 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:27.013 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:27.271 [2024-07-15 11:54:35.093389] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:27.271 nvme0n1 00:28:27.271 11:54:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:27.271 11:54:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:27.271 11:54:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:27.271 11:54:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:27.271 11:54:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.271 11:54:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:27.529 11:54:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:27.529 11:54:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:27.529 11:54:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:27.529 11:54:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:27.529 11:54:35 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:27.529 11:54:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:27.529 11:54:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.786 11:54:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:27.786 11:54:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.046 Running I/O for 1 seconds... 00:28:28.980 00:28:28.980 Latency(us) 00:28:28.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.980 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:28.980 nvme0n1 : 1.01 9430.05 36.84 0.00 0.00 13513.03 7524.50 24660.95 00:28:28.980 =================================================================================================================== 00:28:28.980 Total : 9430.05 36.84 0.00 0.00 13513.03 7524.50 24660.95 00:28:28.980 0 00:28:28.980 11:54:36 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:28.980 11:54:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:29.238 11:54:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:29.238 11:54:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:29.238 11:54:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.238 11:54:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.238 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.238 11:54:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:29.496 11:54:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:29.496 11:54:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:29.496 11:54:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:29.496 11:54:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.496 11:54:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.496 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.496 11:54:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:29.753 11:54:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:29.754 11:54:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:29.754 11:54:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:29.754 11:54:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:29.754 11:54:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:29.754 11:54:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.754 11:54:37 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:29.754 11:54:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.754 11:54:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:29.754 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:30.011 [2024-07-15 11:54:37.796557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:30.011 [2024-07-15 11:54:37.797138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7abd0 (107): Transport endpoint is not connected 00:28:30.011 [2024-07-15 11:54:37.798130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7abd0 (9): Bad file descriptor 00:28:30.011 [2024-07-15 11:54:37.799130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:30.011 [2024-07-15 11:54:37.799150] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:30.011 [2024-07-15 11:54:37.799177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:30.011 request: 00:28:30.011 { 00:28:30.011 "name": "nvme0", 00:28:30.011 "trtype": "tcp", 00:28:30.011 "traddr": "127.0.0.1", 00:28:30.011 "adrfam": "ipv4", 00:28:30.011 "trsvcid": "4420", 00:28:30.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:30.011 "prchk_reftag": false, 00:28:30.011 "prchk_guard": false, 00:28:30.011 "hdgst": false, 00:28:30.011 "ddgst": false, 00:28:30.011 "psk": "key1", 00:28:30.011 "method": "bdev_nvme_attach_controller", 00:28:30.011 "req_id": 1 00:28:30.011 } 00:28:30.011 Got JSON-RPC error response 00:28:30.011 response: 00:28:30.011 { 00:28:30.011 "code": -5, 00:28:30.011 "message": "Input/output error" 00:28:30.011 } 00:28:30.011 11:54:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:30.011 11:54:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:30.011 11:54:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:30.011 11:54:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:30.011 11:54:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:30.011 11:54:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:30.011 11:54:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:30.011 11:54:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:30.011 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.011 11:54:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:30.270 11:54:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:30.270 11:54:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:30.270 11:54:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:30.270 11:54:38 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:30.270 11:54:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:30.270 11:54:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:30.270 11:54:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.528 11:54:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:30.528 11:54:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:30.528 11:54:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:30.786 11:54:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:30.786 11:54:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:31.044 11:54:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:31.044 11:54:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:31.044 11:54:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:31.302 11:54:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:31.302 11:54:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.91ZaHMJgY3 00:28:31.302 11:54:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:31.302 11:54:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:31.302 11:54:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:31.560 [2024-07-15 11:54:39.290173] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.91ZaHMJgY3': 0100660 00:28:31.560 [2024-07-15 11:54:39.290210] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:31.560 request: 00:28:31.560 { 00:28:31.560 "name": "key0", 00:28:31.560 "path": "/tmp/tmp.91ZaHMJgY3", 00:28:31.560 "method": "keyring_file_add_key", 00:28:31.560 "req_id": 1 00:28:31.560 } 00:28:31.560 Got JSON-RPC error response 00:28:31.560 response: 00:28:31.560 { 00:28:31.560 "code": -1, 00:28:31.560 "message": "Operation not permitted" 00:28:31.560 } 00:28:31.560 11:54:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:31.560 11:54:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:31.560 11:54:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:31.560 11:54:39 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:31.560 11:54:39 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.91ZaHMJgY3 00:28:31.560 11:54:39 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:31.560 11:54:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.91ZaHMJgY3 00:28:31.560 11:54:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.91ZaHMJgY3 00:28:31.818 11:54:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:31.818 11:54:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:31.818 11:54:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:31.818 11:54:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:31.818 11:54:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:31.818 11:54:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:31.818 11:54:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:31.818 11:54:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:31.818 11:54:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:31.818 11:54:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:32.076 [2024-07-15 11:54:40.048286] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.91ZaHMJgY3': No such file or directory 00:28:32.076 [2024-07-15 11:54:40.048325] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:32.076 [2024-07-15 11:54:40.048367] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:32.076 [2024-07-15 11:54:40.048379] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:32.076 [2024-07-15 11:54:40.048398] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:32.076 request: 00:28:32.076 { 00:28:32.076 "name": "nvme0", 00:28:32.076 "trtype": "tcp", 00:28:32.076 "traddr": "127.0.0.1", 00:28:32.076 "adrfam": "ipv4", 00:28:32.076 
"trsvcid": "4420", 00:28:32.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:32.076 "prchk_reftag": false, 00:28:32.076 "prchk_guard": false, 00:28:32.076 "hdgst": false, 00:28:32.076 "ddgst": false, 00:28:32.076 "psk": "key0", 00:28:32.076 "method": "bdev_nvme_attach_controller", 00:28:32.076 "req_id": 1 00:28:32.076 } 00:28:32.076 Got JSON-RPC error response 00:28:32.076 response: 00:28:32.076 { 00:28:32.076 "code": -19, 00:28:32.076 "message": "No such device" 00:28:32.076 } 00:28:32.334 11:54:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:32.334 11:54:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:32.334 11:54:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:32.334 11:54:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:32.334 11:54:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:32.334 11:54:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NRTdJb99PS 00:28:32.334 11:54:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:32.334 11:54:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:32.334 11:54:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:32.334 11:54:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:32.334 11:54:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:32.334 11:54:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:32.334 11:54:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:32.593 11:54:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NRTdJb99PS 00:28:32.593 11:54:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NRTdJb99PS 00:28:32.593 11:54:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.NRTdJb99PS 00:28:32.593 11:54:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NRTdJb99PS 00:28:32.593 11:54:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NRTdJb99PS 00:28:32.853 11:54:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:32.853 11:54:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:33.111 nvme0n1 00:28:33.111 
11:54:40 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:33.111 11:54:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:33.111 11:54:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.111 11:54:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.111 11:54:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.111 11:54:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:33.368 11:54:41 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:33.368 11:54:41 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:33.368 11:54:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:33.626 11:54:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:33.626 11:54:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:33.626 11:54:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.626 11:54:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.626 11:54:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:33.884 11:54:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:33.884 11:54:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:33.884 11:54:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:33.884 11:54:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.884 11:54:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.884 11:54:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:33.884 11:54:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:34.142 11:54:41 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:34.142 11:54:41 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:34.142 11:54:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:34.400 11:54:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:34.400 11:54:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:34.400 11:54:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:34.658 11:54:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:34.658 11:54:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NRTdJb99PS 00:28:34.658 11:54:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NRTdJb99PS 00:28:34.658 11:54:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LYxLNNQpeZ 00:28:34.658 11:54:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LYxLNNQpeZ 00:28:34.953 11:54:42 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:34.953 11:54:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:35.240 nvme0n1 00:28:35.240 11:54:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:35.240 11:54:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:35.803 11:54:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:35.803 "subsystems": [ 00:28:35.803 { 00:28:35.803 "subsystem": "keyring", 00:28:35.803 "config": [ 00:28:35.803 { 00:28:35.803 "method": "keyring_file_add_key", 00:28:35.803 "params": { 00:28:35.803 "name": "key0", 00:28:35.803 "path": "/tmp/tmp.NRTdJb99PS" 00:28:35.803 } 00:28:35.803 }, 00:28:35.803 { 00:28:35.803 "method": "keyring_file_add_key", 00:28:35.803 "params": { 00:28:35.803 "name": "key1", 00:28:35.803 "path": "/tmp/tmp.LYxLNNQpeZ" 00:28:35.803 } 00:28:35.803 } 00:28:35.803 ] 00:28:35.803 }, 00:28:35.803 { 00:28:35.803 "subsystem": "iobuf", 00:28:35.803 "config": [ 00:28:35.803 { 00:28:35.803 "method": "iobuf_set_options", 00:28:35.803 "params": { 00:28:35.803 "small_pool_count": 8192, 00:28:35.803 "large_pool_count": 1024, 00:28:35.803 "small_bufsize": 8192, 00:28:35.803 "large_bufsize": 135168 00:28:35.803 } 00:28:35.803 } 00:28:35.803 ] 00:28:35.803 }, 00:28:35.803 { 00:28:35.804 "subsystem": "sock", 00:28:35.804 "config": [ 00:28:35.804 { 00:28:35.804 "method": "sock_set_default_impl", 00:28:35.804 "params": { 00:28:35.804 "impl_name": "posix" 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "sock_impl_set_options", 00:28:35.804 "params": { 00:28:35.804 "impl_name": "ssl", 00:28:35.804 "recv_buf_size": 4096, 00:28:35.804 "send_buf_size": 4096, 00:28:35.804 "enable_recv_pipe": true, 00:28:35.804 "enable_quickack": false, 00:28:35.804 "enable_placement_id": 0, 00:28:35.804 "enable_zerocopy_send_server": true, 00:28:35.804 "enable_zerocopy_send_client": false, 00:28:35.804 "zerocopy_threshold": 0, 00:28:35.804 "tls_version": 0, 00:28:35.804 "enable_ktls": false 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "sock_impl_set_options", 00:28:35.804 "params": { 00:28:35.804 "impl_name": "posix", 00:28:35.804 "recv_buf_size": 2097152, 00:28:35.804 "send_buf_size": 2097152, 00:28:35.804 "enable_recv_pipe": true, 00:28:35.804 "enable_quickack": false, 00:28:35.804 "enable_placement_id": 0, 00:28:35.804 "enable_zerocopy_send_server": true, 00:28:35.804 "enable_zerocopy_send_client": false, 00:28:35.804 "zerocopy_threshold": 0, 00:28:35.804 "tls_version": 0, 00:28:35.804 "enable_ktls": false 00:28:35.804 } 00:28:35.804 } 00:28:35.804 ] 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "subsystem": "vmd", 00:28:35.804 "config": [] 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "subsystem": "accel", 00:28:35.804 "config": [ 00:28:35.804 { 00:28:35.804 "method": "accel_set_options", 00:28:35.804 "params": { 00:28:35.804 "small_cache_size": 128, 00:28:35.804 "large_cache_size": 16, 00:28:35.804 "task_count": 2048, 00:28:35.804 "sequence_count": 2048, 00:28:35.804 "buf_count": 2048 00:28:35.804 } 00:28:35.804 } 00:28:35.804 ] 00:28:35.804 
}, 00:28:35.804 { 00:28:35.804 "subsystem": "bdev", 00:28:35.804 "config": [ 00:28:35.804 { 00:28:35.804 "method": "bdev_set_options", 00:28:35.804 "params": { 00:28:35.804 "bdev_io_pool_size": 65535, 00:28:35.804 "bdev_io_cache_size": 256, 00:28:35.804 "bdev_auto_examine": true, 00:28:35.804 "iobuf_small_cache_size": 128, 00:28:35.804 "iobuf_large_cache_size": 16 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "bdev_raid_set_options", 00:28:35.804 "params": { 00:28:35.804 "process_window_size_kb": 1024 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "bdev_iscsi_set_options", 00:28:35.804 "params": { 00:28:35.804 "timeout_sec": 30 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "bdev_nvme_set_options", 00:28:35.804 "params": { 00:28:35.804 "action_on_timeout": "none", 00:28:35.804 "timeout_us": 0, 00:28:35.804 "timeout_admin_us": 0, 00:28:35.804 "keep_alive_timeout_ms": 10000, 00:28:35.804 "arbitration_burst": 0, 00:28:35.804 "low_priority_weight": 0, 00:28:35.804 "medium_priority_weight": 0, 00:28:35.804 "high_priority_weight": 0, 00:28:35.804 "nvme_adminq_poll_period_us": 10000, 00:28:35.804 "nvme_ioq_poll_period_us": 0, 00:28:35.804 "io_queue_requests": 512, 00:28:35.804 "delay_cmd_submit": true, 00:28:35.804 "transport_retry_count": 4, 00:28:35.804 "bdev_retry_count": 3, 00:28:35.804 "transport_ack_timeout": 0, 00:28:35.804 "ctrlr_loss_timeout_sec": 0, 00:28:35.804 "reconnect_delay_sec": 0, 00:28:35.804 "fast_io_fail_timeout_sec": 0, 00:28:35.804 "disable_auto_failback": false, 00:28:35.804 "generate_uuids": false, 00:28:35.804 "transport_tos": 0, 00:28:35.804 "nvme_error_stat": false, 00:28:35.804 "rdma_srq_size": 0, 00:28:35.804 "io_path_stat": false, 00:28:35.804 "allow_accel_sequence": false, 00:28:35.804 "rdma_max_cq_size": 0, 00:28:35.804 "rdma_cm_event_timeout_ms": 0, 00:28:35.804 "dhchap_digests": [ 00:28:35.804 "sha256", 00:28:35.804 "sha384", 00:28:35.804 "sha512" 00:28:35.804 ], 00:28:35.804 "dhchap_dhgroups": [ 00:28:35.804 "null", 00:28:35.804 "ffdhe2048", 00:28:35.804 "ffdhe3072", 00:28:35.804 "ffdhe4096", 00:28:35.804 "ffdhe6144", 00:28:35.804 "ffdhe8192" 00:28:35.804 ] 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "bdev_nvme_attach_controller", 00:28:35.804 "params": { 00:28:35.804 "name": "nvme0", 00:28:35.804 "trtype": "TCP", 00:28:35.804 "adrfam": "IPv4", 00:28:35.804 "traddr": "127.0.0.1", 00:28:35.804 "trsvcid": "4420", 00:28:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.804 "prchk_reftag": false, 00:28:35.804 "prchk_guard": false, 00:28:35.804 "ctrlr_loss_timeout_sec": 0, 00:28:35.804 "reconnect_delay_sec": 0, 00:28:35.804 "fast_io_fail_timeout_sec": 0, 00:28:35.804 "psk": "key0", 00:28:35.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.804 "hdgst": false, 00:28:35.804 "ddgst": false 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "bdev_nvme_set_hotplug", 00:28:35.804 "params": { 00:28:35.804 "period_us": 100000, 00:28:35.804 "enable": false 00:28:35.804 } 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "method": "bdev_wait_for_examine" 00:28:35.804 } 00:28:35.804 ] 00:28:35.804 }, 00:28:35.804 { 00:28:35.804 "subsystem": "nbd", 00:28:35.804 "config": [] 00:28:35.804 } 00:28:35.804 ] 00:28:35.804 }' 00:28:35.804 11:54:43 keyring_file -- keyring/file.sh@114 -- # killprocess 3159651 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3159651 ']' 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3159651 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3159651 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3159651' 00:28:35.804 killing process with pid 3159651 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@967 -- # kill 3159651 00:28:35.804 Received shutdown signal, test time was about 1.000000 seconds 00:28:35.804 00:28:35.804 Latency(us) 00:28:35.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.804 =================================================================================================================== 00:28:35.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.804 11:54:43 keyring_file -- common/autotest_common.sh@972 -- # wait 3159651 00:28:36.062 11:54:43 keyring_file -- keyring/file.sh@117 -- # bperfpid=3161109 00:28:36.062 11:54:43 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3161109 /var/tmp/bperf.sock 00:28:36.062 11:54:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3161109 ']' 00:28:36.062 11:54:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.062 11:54:43 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:36.062 11:54:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:36.063 11:54:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:36.063 11:54:43 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:36.063 "subsystems": [ 00:28:36.063 { 00:28:36.063 "subsystem": "keyring", 00:28:36.063 "config": [ 00:28:36.063 { 00:28:36.063 "method": "keyring_file_add_key", 00:28:36.063 "params": { 00:28:36.063 "name": "key0", 00:28:36.063 "path": "/tmp/tmp.NRTdJb99PS" 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "keyring_file_add_key", 00:28:36.063 "params": { 00:28:36.063 "name": "key1", 00:28:36.063 "path": "/tmp/tmp.LYxLNNQpeZ" 00:28:36.063 } 00:28:36.063 } 00:28:36.063 ] 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "subsystem": "iobuf", 00:28:36.063 "config": [ 00:28:36.063 { 00:28:36.063 "method": "iobuf_set_options", 00:28:36.063 "params": { 00:28:36.063 "small_pool_count": 8192, 00:28:36.063 "large_pool_count": 1024, 00:28:36.063 "small_bufsize": 8192, 00:28:36.063 "large_bufsize": 135168 00:28:36.063 } 00:28:36.063 } 00:28:36.063 ] 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "subsystem": "sock", 00:28:36.063 "config": [ 00:28:36.063 { 00:28:36.063 "method": "sock_set_default_impl", 00:28:36.063 "params": { 00:28:36.063 "impl_name": "posix" 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "sock_impl_set_options", 00:28:36.063 "params": { 00:28:36.063 "impl_name": "ssl", 00:28:36.063 "recv_buf_size": 4096, 00:28:36.063 "send_buf_size": 4096, 00:28:36.063 "enable_recv_pipe": true, 00:28:36.063 "enable_quickack": false, 00:28:36.063 "enable_placement_id": 0, 00:28:36.063 "enable_zerocopy_send_server": true, 00:28:36.063 "enable_zerocopy_send_client": false, 00:28:36.063 "zerocopy_threshold": 0, 00:28:36.063 "tls_version": 0, 00:28:36.063 "enable_ktls": false 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "sock_impl_set_options", 00:28:36.063 "params": { 00:28:36.063 "impl_name": "posix", 00:28:36.063 "recv_buf_size": 2097152, 00:28:36.063 "send_buf_size": 2097152, 00:28:36.063 "enable_recv_pipe": true, 00:28:36.063 "enable_quickack": false, 00:28:36.063 "enable_placement_id": 0, 00:28:36.063 "enable_zerocopy_send_server": true, 00:28:36.063 "enable_zerocopy_send_client": false, 00:28:36.063 "zerocopy_threshold": 0, 00:28:36.063 "tls_version": 0, 00:28:36.063 "enable_ktls": false 00:28:36.063 } 00:28:36.063 } 00:28:36.063 ] 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "subsystem": "vmd", 00:28:36.063 "config": [] 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "subsystem": "accel", 00:28:36.063 "config": [ 00:28:36.063 { 00:28:36.063 "method": "accel_set_options", 00:28:36.063 "params": { 00:28:36.063 "small_cache_size": 128, 00:28:36.063 "large_cache_size": 16, 00:28:36.063 "task_count": 2048, 00:28:36.063 "sequence_count": 2048, 00:28:36.063 "buf_count": 2048 00:28:36.063 } 00:28:36.063 } 00:28:36.063 ] 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "subsystem": "bdev", 00:28:36.063 "config": [ 00:28:36.063 { 00:28:36.063 "method": "bdev_set_options", 00:28:36.063 "params": { 00:28:36.063 "bdev_io_pool_size": 65535, 00:28:36.063 "bdev_io_cache_size": 256, 00:28:36.063 "bdev_auto_examine": true, 00:28:36.063 "iobuf_small_cache_size": 128, 00:28:36.063 "iobuf_large_cache_size": 16 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "bdev_raid_set_options", 00:28:36.063 "params": { 00:28:36.063 "process_window_size_kb": 1024 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "bdev_iscsi_set_options", 00:28:36.063 "params": { 00:28:36.063 "timeout_sec": 30 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": 
"bdev_nvme_set_options", 00:28:36.063 "params": { 00:28:36.063 "action_on_timeout": "none", 00:28:36.063 "timeout_us": 0, 00:28:36.063 "timeout_admin_us": 0, 00:28:36.063 "keep_alive_timeout_ms": 10000, 00:28:36.063 "arbitration_burst": 0, 00:28:36.063 "low_priority_weight": 0, 00:28:36.063 "medium_priority_weight": 0, 00:28:36.063 "high_priority_weight": 0, 00:28:36.063 "nvme_adminq_poll_period_us": 10000, 00:28:36.063 "nvme_ioq_poll_period_us": 0, 00:28:36.063 "io_queue_requests": 512, 00:28:36.063 "delay_cmd_submit": true, 00:28:36.063 "transport_retry_count": 4, 00:28:36.063 "bdev_retry_count": 3, 00:28:36.063 "transport_ack_timeout": 0, 00:28:36.063 "ctrlr_loss_timeout_sec": 0, 00:28:36.063 "reconnect_delay_sec": 0, 00:28:36.063 "fast_io_fail_timeout_sec": 0, 00:28:36.063 "disable_auto_failback": false, 00:28:36.063 "generate_uuids": false, 00:28:36.063 "transport_tos": 0, 00:28:36.063 "nvme_error_stat": false, 00:28:36.063 "rdma_srq_size": 0, 00:28:36.063 "io_path_stat": false, 00:28:36.063 "allow_accel_sequence": false, 00:28:36.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.063 "rdma_max_cq_size": 0, 00:28:36.063 "rdma_cm_event_timeout_ms": 0, 00:28:36.063 "dhchap_digests": [ 00:28:36.063 "sha256", 00:28:36.063 "sha384", 00:28:36.063 "sha512" 00:28:36.063 ], 00:28:36.063 "dhchap_dhgroups": [ 00:28:36.063 "null", 00:28:36.063 "ffdhe2048", 00:28:36.063 "ffdhe3072", 00:28:36.063 "ffdhe4096", 00:28:36.063 "ffdhe6144", 00:28:36.063 "ffdhe8192" 00:28:36.063 ] 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "bdev_nvme_attach_controller", 00:28:36.063 "params": { 00:28:36.063 "name": "nvme0", 00:28:36.063 "trtype": "TCP", 00:28:36.063 "adrfam": "IPv4", 00:28:36.063 "traddr": "127.0.0.1", 00:28:36.063 "trsvcid": "4420", 00:28:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.063 "prchk_reftag": false, 00:28:36.063 "prchk_guard": false, 00:28:36.063 "ctrlr_loss_timeout_sec": 0, 00:28:36.063 "reconnect_delay_sec": 0, 00:28:36.063 "fast_io_fail_timeout_sec": 0, 00:28:36.063 "psk": "key0", 00:28:36.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.063 "hdgst": false, 00:28:36.063 "ddgst": false 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "bdev_nvme_set_hotplug", 00:28:36.063 "params": { 00:28:36.063 "period_us": 100000, 00:28:36.063 "enable": false 00:28:36.063 } 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "method": "bdev_wait_for_examine" 00:28:36.063 } 00:28:36.063 ] 00:28:36.063 }, 00:28:36.063 { 00:28:36.063 "subsystem": "nbd", 00:28:36.063 "config": [] 00:28:36.063 } 00:28:36.063 ] 00:28:36.063 }' 00:28:36.063 11:54:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.063 11:54:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:36.063 [2024-07-15 11:54:43.833898] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:28:36.063 [2024-07-15 11:54:43.833976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161109 ] 00:28:36.063 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.063 [2024-07-15 11:54:43.891758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.063 [2024-07-15 11:54:43.998084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.322 [2024-07-15 11:54:44.187219] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:36.888 11:54:44 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.888 11:54:44 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:36.888 11:54:44 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:36.888 11:54:44 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:36.888 11:54:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:37.145 11:54:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:37.146 11:54:45 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:37.146 11:54:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:37.146 11:54:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:37.146 11:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:37.146 11:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:37.146 11:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:37.404 11:54:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:37.404 11:54:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:37.404 11:54:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:37.404 11:54:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:37.404 11:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:37.404 11:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:37.404 11:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:37.660 11:54:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:37.660 11:54:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:37.660 11:54:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:37.660 11:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:37.916 11:54:45 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:37.916 11:54:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:37.916 11:54:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NRTdJb99PS /tmp/tmp.LYxLNNQpeZ 00:28:37.916 11:54:45 keyring_file -- keyring/file.sh@20 -- # killprocess 3161109 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3161109 ']' 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3161109 00:28:37.916 11:54:45 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3161109 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3161109' 00:28:37.916 killing process with pid 3161109 00:28:37.916 11:54:45 keyring_file -- common/autotest_common.sh@967 -- # kill 3161109 00:28:37.917 Received shutdown signal, test time was about 1.000000 seconds 00:28:37.917 00:28:37.917 Latency(us) 00:28:37.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.917 =================================================================================================================== 00:28:37.917 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:37.917 11:54:45 keyring_file -- common/autotest_common.sh@972 -- # wait 3161109 00:28:38.172 11:54:46 keyring_file -- keyring/file.sh@21 -- # killprocess 3159634 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3159634 ']' 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3159634 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3159634 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3159634' 00:28:38.172 killing process with pid 3159634 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@967 -- # kill 3159634 00:28:38.172 [2024-07-15 11:54:46.117117] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:38.172 11:54:46 keyring_file -- common/autotest_common.sh@972 -- # wait 3159634 00:28:38.734 00:28:38.734 real 0m14.184s 00:28:38.734 user 0m35.207s 00:28:38.734 sys 0m3.363s 00:28:38.734 11:54:46 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.734 11:54:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:38.734 ************************************ 00:28:38.734 END TEST keyring_file 00:28:38.734 ************************************ 00:28:38.734 11:54:46 -- common/autotest_common.sh@1142 -- # return 0 00:28:38.734 11:54:46 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:38.734 11:54:46 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:38.734 11:54:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:38.734 11:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.734 11:54:46 -- common/autotest_common.sh@10 -- # set +x 00:28:38.734 ************************************ 00:28:38.734 START TEST keyring_linux 00:28:38.734 ************************************ 00:28:38.734 11:54:46 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:38.734 * Looking for test storage... 00:28:38.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:38.734 11:54:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:38.734 11:54:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.734 11:54:46 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.734 11:54:46 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.734 11:54:46 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.734 11:54:46 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.734 11:54:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.734 11:54:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.734 11:54:46 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.735 11:54:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:38.735 11:54:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:38.735 11:54:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:38.735 11:54:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:38.735 11:54:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:38.735 11:54:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:38.735 11:54:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:38.735 11:54:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:38.735 11:54:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:38.735 11:54:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:38.992 11:54:46 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:38.992 /tmp/:spdk-test:key0 00:28:38.992 11:54:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:38.992 11:54:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:38.992 11:54:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.992 11:54:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:38.992 11:54:46 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:38.992 11:54:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:38.992 11:54:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:38.992 11:54:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:38.992 /tmp/:spdk-test:key1 00:28:38.992 11:54:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3161476 00:28:38.992 11:54:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:38.992 11:54:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3161476 00:28:38.992 11:54:46 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3161476 ']' 00:28:38.992 11:54:46 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.992 11:54:46 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.992 11:54:46 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.992 11:54:46 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.992 11:54:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:38.992 [2024-07-15 11:54:46.816697] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:28:38.992 [2024-07-15 11:54:46.816805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161476 ] 00:28:38.992 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.992 [2024-07-15 11:54:46.878660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.248 [2024-07-15 11:54:46.988478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.248 11:54:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.248 11:54:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:39.248 11:54:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:39.248 11:54:47 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.248 11:54:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:39.248 [2024-07-15 11:54:47.235576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.505 null0 00:28:39.505 [2024-07-15 11:54:47.267606] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:39.505 [2024-07-15 11:54:47.268155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.505 11:54:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:39.505 58087482 00:28:39.505 11:54:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:39.505 725322032 00:28:39.505 11:54:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3161601 00:28:39.505 11:54:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:39.505 11:54:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3161601 /var/tmp/bperf.sock 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3161601 ']' 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.505 11:54:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:39.505 [2024-07-15 11:54:47.329965] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:28:39.505 [2024-07-15 11:54:47.330046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161601 ] 00:28:39.505 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.505 [2024-07-15 11:54:47.385569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.506 [2024-07-15 11:54:47.490938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.763 11:54:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.763 11:54:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:39.763 11:54:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:39.763 11:54:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:40.020 11:54:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:40.020 11:54:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.277 11:54:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:40.277 11:54:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:40.534 [2024-07-15 11:54:48.348229] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:40.534 nvme0n1 00:28:40.534 11:54:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:40.534 11:54:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:40.534 11:54:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:40.534 11:54:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:40.534 11:54:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.534 11:54:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:40.791 11:54:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:40.791 11:54:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:40.791 11:54:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:40.791 11:54:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:40.791 11:54:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.791 11:54:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.791 11:54:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@25 -- # sn=58087482 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@26 -- # [[ 58087482 == \5\8\0\8\7\4\8\2 ]] 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 58087482 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:41.063 11:54:48 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.063 Running I/O for 1 seconds... 00:28:42.450 00:28:42.450 Latency(us) 00:28:42.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.450 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:42.450 nvme0n1 : 1.01 9722.41 37.98 0.00 0.00 13066.76 8301.23 21942.42 00:28:42.450 =================================================================================================================== 00:28:42.450 Total : 9722.41 37.98 0.00 0.00 13066.76 8301.23 21942.42 00:28:42.450 0 00:28:42.450 11:54:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:42.450 11:54:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:42.450 11:54:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:42.450 11:54:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:42.450 11:54:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:42.450 11:54:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:42.450 11:54:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:42.450 11:54:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:42.707 11:54:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:42.707 11:54:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:42.707 11:54:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:42.708 11:54:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:42.708 11:54:50 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:42.708 11:54:50 keyring_linux -- keyring/common.sh@8 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:42.965 [2024-07-15 11:54:50.802306] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:42.965 [2024-07-15 11:54:50.802476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7780 (107): Transport endpoint is not connected 00:28:42.965 [2024-07-15 11:54:50.803468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7780 (9): Bad file descriptor 00:28:42.965 [2024-07-15 11:54:50.804468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:42.965 [2024-07-15 11:54:50.804492] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:42.965 [2024-07-15 11:54:50.804513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:42.965 request: 00:28:42.965 { 00:28:42.965 "name": "nvme0", 00:28:42.965 "trtype": "tcp", 00:28:42.965 "traddr": "127.0.0.1", 00:28:42.965 "adrfam": "ipv4", 00:28:42.965 "trsvcid": "4420", 00:28:42.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:42.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:42.965 "prchk_reftag": false, 00:28:42.965 "prchk_guard": false, 00:28:42.965 "hdgst": false, 00:28:42.965 "ddgst": false, 00:28:42.965 "psk": ":spdk-test:key1", 00:28:42.965 "method": "bdev_nvme_attach_controller", 00:28:42.965 "req_id": 1 00:28:42.965 } 00:28:42.965 Got JSON-RPC error response 00:28:42.965 response: 00:28:42.965 { 00:28:42.965 "code": -5, 00:28:42.965 "message": "Input/output error" 00:28:42.965 } 00:28:42.965 11:54:50 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:42.965 11:54:50 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:42.965 11:54:50 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:42.965 11:54:50 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@33 -- # sn=58087482 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 58087482 00:28:42.965 1 links removed 00:28:42.965 11:54:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:42.966 11:54:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:42.966 11:54:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:42.966 11:54:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:42.966 11:54:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:42.966 11:54:50 keyring_linux -- keyring/linux.sh@33 -- # sn=725322032 00:28:42.966 11:54:50 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 725322032 00:28:42.966 1 links removed 00:28:42.966 11:54:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3161601 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3161601 ']' 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3161601 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3161601 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3161601' 00:28:42.966 killing process with pid 3161601 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@967 -- # kill 3161601 00:28:42.966 Received shutdown signal, test time was about 1.000000 seconds 00:28:42.966 00:28:42.966 Latency(us) 00:28:42.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.966 =================================================================================================================== 00:28:42.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.966 11:54:50 keyring_linux -- common/autotest_common.sh@972 -- # wait 3161601 00:28:43.222 11:54:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3161476 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3161476 ']' 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3161476 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3161476 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3161476' 00:28:43.222 killing process with pid 3161476 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 3161476 00:28:43.222 11:54:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 3161476 00:28:43.787 00:28:43.787 real 0m4.941s 00:28:43.787 user 0m9.431s 00:28:43.787 sys 0m1.621s 00:28:43.787 11:54:51 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.787 11:54:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:43.787 ************************************ 00:28:43.787 END TEST keyring_linux 00:28:43.787 ************************************ 00:28:43.787 11:54:51 -- common/autotest_common.sh@1142 -- # return 0 00:28:43.787 11:54:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 
']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:43.787 11:54:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:43.787 11:54:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:43.787 11:54:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:43.787 11:54:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:43.787 11:54:51 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:43.787 11:54:51 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:43.787 11:54:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:43.787 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:28:43.787 11:54:51 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:43.787 11:54:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:43.787 11:54:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:43.787 11:54:51 -- common/autotest_common.sh@10 -- # set +x 00:28:45.687 INFO: APP EXITING 00:28:45.688 INFO: killing all VMs 00:28:45.688 INFO: killing vhost app 00:28:45.688 INFO: EXIT DONE 00:28:46.622 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:28:46.622 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:46.622 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:46.622 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:46.622 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:46.622 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:46.622 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:46.622 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:46.622 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:46.622 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:46.622 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:46.622 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:46.880 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:46.880 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:46.880 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:46.880 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:46.880 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:48.252 Cleaning 00:28:48.252 Removing: /var/run/dpdk/spdk0/config 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:48.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:48.253 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:48.253 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:48.253 Removing: /var/run/dpdk/spdk1/config 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:48.253 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:48.253 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:48.253 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:48.253 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:48.253 Removing: /var/run/dpdk/spdk2/config 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:48.253 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:48.253 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:48.253 Removing: /var/run/dpdk/spdk3/config 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:48.253 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:48.253 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:48.253 Removing: /var/run/dpdk/spdk4/config 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:48.253 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:48.253 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:48.253 Removing: /dev/shm/bdev_svc_trace.1 00:28:48.253 Removing: /dev/shm/nvmf_trace.0 00:28:48.253 Removing: /dev/shm/spdk_tgt_trace.pid2901170 00:28:48.253 Removing: /var/run/dpdk/spdk0 00:28:48.253 Removing: /var/run/dpdk/spdk1 00:28:48.253 Removing: /var/run/dpdk/spdk2 00:28:48.253 Removing: /var/run/dpdk/spdk3 00:28:48.253 Removing: /var/run/dpdk/spdk4 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2899619 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2900343 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2901170 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2901601 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2902290 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2902430 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2903143 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2903158 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2903401 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2904712 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2905637 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2905942 
00:28:48.253 Removing: /var/run/dpdk/spdk_pid2906127 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2906335 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2906645 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2906812 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2906965 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2907145 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2907456 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2909807 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2909971 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2910137 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2910251 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2910571 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2910586 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911008 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911027 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911306 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911321 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911491 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911612 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2911981 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2912137 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2912457 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2912624 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2912646 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2912833 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2912989 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2913146 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2913418 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2913581 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2913737 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2914004 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2914171 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2914329 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2914547 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2914759 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2914917 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2915069 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2915347 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2915504 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2915663 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2915929 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2916100 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2916255 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2916529 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2916687 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2916838 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2917047 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2919162 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2945810 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2948442 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2955427 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2958749 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2961109 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2962136 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2966009 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2969870 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2969872 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2970522 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2971185 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2971729 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2972124 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2972174 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2972392 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2972525 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2972531 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2973139 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2973721 
00:28:48.253 Removing: /var/run/dpdk/spdk_pid2974384 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2974784 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2974788 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2975048 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2975940 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2976663 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2982196 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2982468 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2985123 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2988836 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2991515 00:28:48.253 Removing: /var/run/dpdk/spdk_pid2998064 00:28:48.253 Removing: /var/run/dpdk/spdk_pid3003294 00:28:48.253 Removing: /var/run/dpdk/spdk_pid3004498 00:28:48.253 Removing: /var/run/dpdk/spdk_pid3005162 00:28:48.253 Removing: /var/run/dpdk/spdk_pid3015528 00:28:48.253 Removing: /var/run/dpdk/spdk_pid3017756 00:28:48.253 Removing: /var/run/dpdk/spdk_pid3042320 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3045235 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3046311 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3047615 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3047756 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3047892 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3048032 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3048347 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3049731 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3050504 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3051432 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3053041 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3053426 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3053915 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3056446 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3062498 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3065156 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3069054 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3070002 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3071091 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3073793 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3076168 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3080404 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3080412 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3083329 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3083469 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3083602 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3083871 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3083985 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3086698 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3087208 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3090267 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3092249 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3095678 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3099014 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3105528 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3110020 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3110022 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3122372 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3122803 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3123642 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3124238 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3124822 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3125229 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3125643 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3126171 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3128568 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3128821 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3132631 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3132742 
00:28:48.511 Removing: /var/run/dpdk/spdk_pid3134410 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3139348 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3139353 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3142270 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3143674 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3145075 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3145937 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3147342 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3148218 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3153706 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3154039 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3154798 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3156504 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3156899 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3157182 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3159634 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3159651 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3161109 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3161476 00:28:48.511 Removing: /var/run/dpdk/spdk_pid3161601 00:28:48.511 Clean 00:28:48.511 11:54:56 -- common/autotest_common.sh@1451 -- # return 0 00:28:48.511 11:54:56 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:48.511 11:54:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:48.511 11:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:48.512 11:54:56 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:48.512 11:54:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:48.512 11:54:56 -- common/autotest_common.sh@10 -- # set +x 00:28:48.512 11:54:56 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:48.512 11:54:56 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:48.512 11:54:56 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:48.512 11:54:56 -- spdk/autotest.sh@391 -- # hash lcov 00:28:48.512 11:54:56 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:48.512 11:54:56 -- spdk/autotest.sh@393 -- # hostname 00:28:48.512 11:54:56 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:48.770 geninfo: WARNING: invalid characters removed from testname! 
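Editor's note: the keyring_linux run traced above exercises SPDK's Linux-keyring PSK path end to end: the TLS key is stored as a "user" key in the session keyring, verified with keyctl print, referenced by name via --psk when attaching the NVMe/TCP controller through bdevperf's RPC socket, and finally unlinked by serial number during cleanup. A minimal sketch of that flow follows; it assumes a bdevperf (or nvmf target) instance already listening on /var/tmp/bperf.sock, and the PSK payload is the interchange-format example key used by the test, not a real secret.

  # Store the PSK in the session keyring under the name the test uses.
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  keyctl add user :spdk-test:key0 "$psk" @s
  # Resolve its serial number and confirm the stored payload.
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"
  # Attach the controller by key name, as bperf_cmd does in the trace above.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0
  # Cleanup mirrors unlink_key: remove the key by serial number.
  keyctl unlink "$sn"

Passing the keyring name rather than the raw PSK keeps the secret itself out of the RPC arguments and process listings; only the key lookup name appears in the command line, which is also why the negative test above with the unlinked :spdk-test:key1 is expected to fail.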
00:29:20.827 11:55:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:20.827 11:55:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:24.105 11:55:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:26.630 11:55:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:29.907 11:55:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:32.463 11:55:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:35.738 11:55:43 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:35.738 11:55:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.738 11:55:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:35.738 11:55:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.738 11:55:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.738 11:55:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.738 11:55:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.738 11:55:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.738 11:55:43 -- paths/export.sh@5 -- $ export PATH 00:29:35.738 11:55:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.738 11:55:43 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:35.738 11:55:43 -- common/autobuild_common.sh@444 -- $ date +%s 00:29:35.738 11:55:43 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721037343.XXXXXX 00:29:35.738 11:55:43 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721037343.oiHAHc 00:29:35.738 11:55:43 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:29:35.738 11:55:43 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:29:35.738 11:55:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:35.738 11:55:43 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:35.738 11:55:43 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:35.738 11:55:43 -- common/autobuild_common.sh@460 -- $ get_config_params 00:29:35.738 11:55:43 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:35.738 11:55:43 -- common/autotest_common.sh@10 -- $ set +x 00:29:35.738 11:55:43 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:35.738 11:55:43 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:29:35.738 11:55:43 -- pm/common@17 -- $ local monitor 00:29:35.738 11:55:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:35.738 11:55:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:35.738 11:55:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:35.738 11:55:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:35.738 11:55:43 -- pm/common@21 -- $ date +%s 00:29:35.738 11:55:43 -- pm/common@25 -- $ sleep 1 00:29:35.738 
11:55:43 -- pm/common@21 -- $ date +%s 00:29:35.738 11:55:43 -- pm/common@21 -- $ date +%s 00:29:35.738 11:55:43 -- pm/common@21 -- $ date +%s 00:29:35.738 11:55:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037343 00:29:35.738 11:55:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037343 00:29:35.738 11:55:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037343 00:29:35.738 11:55:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037343 00:29:35.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037343_collect-vmstat.pm.log 00:29:35.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037343_collect-cpu-load.pm.log 00:29:35.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037343_collect-cpu-temp.pm.log 00:29:35.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037343_collect-bmc-pm.bmc.pm.log 00:29:36.328 11:55:44 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:29:36.328 11:55:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:36.328 11:55:44 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:36.328 11:55:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:36.328 11:55:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:36.328 11:55:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:36.328 11:55:44 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:36.328 11:55:44 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:36.328 11:55:44 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:36.328 11:55:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:36.328 11:55:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:36.328 11:55:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:36.328 11:55:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:36.328 11:55:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.328 11:55:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:36.328 11:55:44 -- pm/common@44 -- $ pid=3171216 00:29:36.328 11:55:44 -- pm/common@50 -- $ kill -TERM 3171216 00:29:36.328 11:55:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.328 11:55:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:36.328 11:55:44 -- pm/common@44 -- $ pid=3171217 00:29:36.328 11:55:44 -- pm/common@50 -- $ kill 
-TERM 3171217 00:29:36.328 11:55:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.328 11:55:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:36.328 11:55:44 -- pm/common@44 -- $ pid=3171219 00:29:36.328 11:55:44 -- pm/common@50 -- $ kill -TERM 3171219 00:29:36.328 11:55:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.328 11:55:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:36.328 11:55:44 -- pm/common@44 -- $ pid=3171248 00:29:36.328 11:55:44 -- pm/common@50 -- $ sudo -E kill -TERM 3171248 00:29:36.328 + [[ -n 2815684 ]] 00:29:36.328 + sudo kill 2815684 00:29:36.339 [Pipeline] } 00:29:36.357 [Pipeline] // stage 00:29:36.362 [Pipeline] } 00:29:36.381 [Pipeline] // timeout 00:29:36.386 [Pipeline] } 00:29:36.410 [Pipeline] // catchError 00:29:36.417 [Pipeline] } 00:29:36.437 [Pipeline] // wrap 00:29:36.443 [Pipeline] } 00:29:36.461 [Pipeline] // catchError 00:29:36.469 [Pipeline] stage 00:29:36.471 [Pipeline] { (Epilogue) 00:29:36.485 [Pipeline] catchError 00:29:36.487 [Pipeline] { 00:29:36.501 [Pipeline] echo 00:29:36.502 Cleanup processes 00:29:36.509 [Pipeline] sh 00:29:36.797 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:36.797 3171351 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:36.797 3171480 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:36.812 [Pipeline] sh 00:29:37.096 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:37.096 ++ grep -v 'sudo pgrep' 00:29:37.096 ++ awk '{print $1}' 00:29:37.096 + sudo kill -9 3171351 00:29:37.108 [Pipeline] sh 00:29:37.393 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:45.531 [Pipeline] sh 00:29:45.817 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:45.817 Artifacts sizes are good 00:29:45.832 [Pipeline] archiveArtifacts 00:29:45.839 Archiving artifacts 00:29:46.051 [Pipeline] sh 00:29:46.336 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:46.352 [Pipeline] cleanWs 00:29:46.363 [WS-CLEANUP] Deleting project workspace... 00:29:46.363 [WS-CLEANUP] Deferred wipeout is used... 00:29:46.370 [WS-CLEANUP] done 00:29:46.373 [Pipeline] } 00:29:46.397 [Pipeline] // catchError 00:29:46.412 [Pipeline] sh 00:29:46.693 + logger -p user.info -t JENKINS-CI 00:29:46.703 [Pipeline] } 00:29:46.721 [Pipeline] // stage 00:29:46.727 [Pipeline] } 00:29:46.745 [Pipeline] // node 00:29:46.751 [Pipeline] End of Pipeline 00:29:46.786 Finished: SUCCESS
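Editor's note: coverage post-processing in the autotest.sh steps above follows a fixed pattern: capture a per-test tracefile, merge it with the baseline, then repeatedly filter out third-party and uninteresting paths before the report is generated. A condensed sketch of the merge-and-filter stage follows; OUT stands in for the spdk/../output directory, only a subset of the logged --rc options is repeated, and the filter patterns are the ones actually applied above.

  # Merge the baseline and per-test tracefiles into one.
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # Strip DPDK, system headers, and example/app code from the combined report.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done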
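Editor's note: the autopackage epilogue also shows how the resource monitors started by start_monitor_resources are torn down: each collect-* script leaves a pid file in the output/power directory, and stop_monitor_resources walks those files and sends SIGTERM. A small sketch of that stop path follows; POWER is a stand-in for the output/power directory, and the assumption that each pid file simply contains the monitor's PID is inferred from the pm/common trace above rather than read from the scripts themselves.

  # Stop any monitor that left a pid file behind (collect-cpu-load, collect-vmstat,
  # collect-cpu-temp, collect-bmc-pm), mirroring the pm/common@42-50 trace above.
  for pid_file in "$POWER"/collect-*.pid; do
      [[ -e $pid_file ]] || continue
      kill -TERM "$(cat "$pid_file")"   # the bmc monitor is killed via sudo in the real script
  done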